Genius Hacks 2026 — Myridius AI Innovation Hackathon

GeniusHacks

2026 Edition

Eight real challenges. Unlimited possibilities.
Bring your collective genius to the table and build something extraordinary that changes the way we and our clients work.

Key Dates — 2026
Registration Opens
3 April 2026
Now Open
Registration Closes
10 April 2026
Submission Deadline
30 April 2026
Final Evaluation
15 May 2026
Earn Your Place Among the Innovators
Compete for top-tier rewards and prestigious accolades that celebrate the most innovative thinkers.
Feature Your Work on the Global Stage
Transform your vision into a flagship Myridius demo. Top solutions receive the funding and mentorship needed to join our official GTM roadmap.
Global Synergy Beyond Boundaries
Unite with diverse talent across regions and functions. Leverage our collective expertise to build without limits, where the strength of the idea defines the outcome.

Eight Strategic Problems.
Real-World Impact.

Pick the problem that excites you the most and create a solution with the power to scale.

Problem 01
Smarter Test Prioritisation Using Historical Data
Mentor — Akhil Jha

When releasing software, teams run the same set of tests every time — regardless of what actually changed. This wastes hours and still misses bugs, because the tests chosen are based on habit and memory rather than evidence. High-risk areas of the system get the same attention as stable ones, and critical issues only surface after the release has gone out.

Expected Outcome
Use past bug reports and change history to predict which areas are most likely to fail, automatically select the most relevant tests, and cut testing time by at least 60% — while catching every serious issue before release.
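To make the idea concrete, here is a minimal sketch of risk-based test selection: rank tests by how many past bugs touched the changed files they cover, then keep only the top few. The function name, data shapes, and scoring rule are illustrative assumptions, not part of any existing Myridius tooling.

```python
from collections import Counter

def prioritise_tests(test_coverage, bug_history, changed_files, budget):
    """Select the `budget` most relevant tests for a change set.

    test_coverage: {test_name: set of source files the test exercises}
    bug_history:   list of files implicated in past bug reports
    changed_files: set of files touched in the current release
    """
    bug_counts = Counter(bug_history)

    # A test's risk score: total past bugs in the changed files it covers.
    def score(test):
        return sum(bug_counts[f] for f in test_coverage[test] if f in changed_files)

    ranked = sorted(test_coverage, key=score, reverse=True)
    # Drop tests with no evidence of relevance to this change set.
    return [t for t in ranked[:budget] if score(t) > 0]
```

A real solution would replace the simple bug count with features such as change recency and code churn, but the shape of the problem — score, rank, cut — stays the same.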
Problem 02
AI-Assisted Planning & Work Estimation
Mentor — Naga Bhavirisetty

Every two weeks, delivery teams spend 3+ hours in planning sessions trying to agree on how much work they can take on. Estimates vary wildly between team members because everyone relies on memory and instinct. Data from hundreds of past planning sessions sits unused in project tools, and new team members have no basis for their estimates at all.

Expected Outcome
Automatically analyse past work patterns to suggest realistic estimates with confidence ranges, improve how accurately teams forecast what they'll deliver by at least 20 percentage points, and cut planning session time by more than half.
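One way to produce "estimates with confidence ranges" is to take the actual effort of similar past work items and report percentiles instead of a single number. The sketch below assumes a flat list of past actuals; matching "similar" items is the harder part a real entry would need to solve.

```python
import statistics

def estimate_with_range(past_actuals, confidence=0.8):
    """Return a (low, median, high) estimate, in the same units as the
    inputs, such that `confidence` of past outcomes fell inside the range.

    past_actuals: non-empty list of actual effort for similar past items.
    """
    data = sorted(past_actuals)
    n = len(data)
    tail = (1 - confidence) / 2          # probability mass cut from each end
    lo_idx = int(tail * (n - 1))
    hi_idx = int((1 - tail) * (n - 1))
    return data[lo_idx], statistics.median(data), data[hi_idx]
```

For example, given ten similar past items that took 4–20 hours, an 80% range might come back as "(4, 6.5, 9)": new team members get a defensible starting point, and outliers like the 20-hour item are visible in the data rather than in the argument.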
Problem 03
Predicting When a Dispute Will Escalate
Mentor — Abhishek Sharma

When someone files an insurance claim, a specialist reviews it and decides how to handle it — mostly based on experience and gut feel. There is no data-driven way to spot early on which cases are likely to turn into expensive legal disputes. By the time the warning signs are obvious, the window to reach an early resolution has already closed, and the cost has multiplied.

Expected Outcome
Analyse both structured claim data and written notes to score each case by escalation risk, correctly flag at least three in four high-risk cases early enough to act, and cut the time spent manually researching the right resolution path from 45 minutes to under 10 minutes per case.
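A toy version of the scoring idea: combine structured claim fields with keyword signals from the free-text notes into a single risk score, and flag cases above a threshold. Every weight, field name, and keyword below is an illustrative assumption; a serious entry would learn these from historical dispute outcomes.

```python
def escalation_risk(claim, notes, threshold=0.5):
    """Return (risk_score, flagged) for a claim.

    claim: dict of structured fields, e.g. {"amount": ..., "prior_disputes": ...}
    notes: free-text adjuster notes for the case.
    """
    score = 0.0
    # Structured signals (weights are illustrative only).
    if claim.get("amount", 0) > 50_000:
        score += 0.3
    if claim.get("prior_disputes", 0) > 0:
        score += 0.2
    # Free-text signals: crude keyword spotting as a stand-in for NLP.
    text = notes.lower()
    for keyword, weight in [("attorney", 0.3), ("lawsuit", 0.3), ("unacceptable", 0.1)]:
        if keyword in text:
            score += weight
    score = min(score, 1.0)
    return score, score >= threshold
```

The "three in four" target in the outcome above is a recall requirement: whatever model replaces this sketch, it should be tuned so that at least 75% of the cases that later escalated would have been flagged.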
Problem 04
Getting Ahead of a Surge in Incoming Requests
Mentor — Mishtert Thangaraj

When a major event happens — a storm, a flood, a large accident — insurance teams find out at the same time as everyone else: through the news. By then, requests are already piling up faster than the team can handle, response times blow past agreed targets, and customers are left waiting for days. There is no early signal to prepare before the wave hits.

Expected Outcome
Monitor public data sources and match unfolding events against existing coverage to generate an early alert and volume estimate within 2 hours of an event being detected — giving teams time to prepare before the surge begins.
Problem 05
Automating Physical Space Inspections with AI Vision
Mentor — Vijay Venkatesh

Businesses with physical locations rely on human inspectors to check that spaces meet brand and safety standards. These visits happen infrequently, take significant time, and produce inconsistent results — two people inspecting the same room often reach different conclusions. Problems go unnoticed for weeks between visits, and when issues are flagged, the reports are too vague for the team on the ground to act on.

Expected Outcome
Analyse images or video against a defined checklist, agree with a human inspector's assessment at least 85% of the time, and produce a specific, actionable report — shrinking the gap between a problem being spotted and someone being assigned to fix it from 5 days to the same day.
Problem 06
Recovering Lost Revenue from Fragmented Availability
Mentor — Nitin Gupta

Hospitality businesses often find themselves with availability that can't be sold — not because there are no guests, but because the available slots are fragmented across channels, dates, or categories in ways that make them unusable. Identifying and fixing these gaps is entirely manual, slow, and depends on individuals knowing where to look. Capacity sits idle while demand goes unmet.

Expected Outcome
Build a system that maps current availability, automatically identifies and recombines fragmented or unusable slots, and demonstrates a measurable improvement in usable capacity — with a clear visual comparison of the before and after state.
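The recombination step has a classic algorithmic core: merging overlapping or back-to-back slots into the fewest usable blocks. A minimal sketch, assuming each slot is a simple (start, end) pair on one channel — the real problem adds channels, dates, and categories on top of this:

```python
def recombine_slots(slots):
    """Merge overlapping or adjacent (start, end) slots into usable blocks."""
    merged = []
    for start, end in sorted(slots):
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous block: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

Three adjacent one-hour fragments become one sellable three-hour block; the before/after counts of usable blocks give exactly the visual comparison the outcome asks for.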
Problem 07
Making Smarter Decisions About Which Tools to Keep
Mentor — Kurt Wysock

Large organisations run hundreds of software tools — many of which do the same job, carry hidden risks, or cost more than anyone realises. Deciding which to keep, retire, or replace takes weeks of manual analysis across spreadsheets, contracts, and system inventories. By the time a recommendation is made, the data is out of date and decision-makers still aren't confident.

Expected Outcome
Bring together data from multiple sources to automatically surface duplicate tools, flag hidden cost and risk, identify upcoming contract decisions, and produce a clear, prioritised set of recommendations — reducing analysis time from weeks to under 30 minutes.
Problem 08
Keeping a Company's Shared Vocabulary Consistent
Mentor — Rohit Suri

Most large organisations maintain a shared glossary of business terms — a common language that teams use to understand data and reports. Over time, this glossary fills with duplicates, near-identical definitions, and descriptions that contradict each other. Nobody has an easy way to find these conflicts, let alone fix them systematically, so the shared vocabulary slowly becomes unreliable.

Expected Outcome
Use AI to detect semantically duplicate or conflicting entries in a business glossary, surface resolution suggestions for human review and approval, and demonstrate a measurable improvement in consistency — with a clear audit trail of every change made.
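As a baseline for "semantically duplicate" detection, here is a sketch that flags glossary entries whose definitions share most of their words, using Jaccard token overlap. This is a deliberately crude stand-in: a real entry would use semantic embeddings, which token overlap approximates only for near-verbatim duplicates.

```python
def find_near_duplicates(glossary, threshold=0.6):
    """Return (term_a, term_b, similarity) pairs whose definitions overlap.

    glossary: {term: definition string}
    """
    def tokens(text):
        return set(text.lower().split())

    terms = list(glossary)
    pairs = []
    for i, a in enumerate(terms):
        for b in terms[i + 1:]:
            ta, tb = tokens(glossary[a]), tokens(glossary[b])
            sim = len(ta & tb) / len(ta | tb)  # Jaccard similarity
            if sim >= threshold:
                pairs.append((a, b, round(sim, 2)))
    return pairs
```

Each flagged pair would then be routed to a human reviewer, and every accepted merge logged — which is where the audit trail in the outcome comes from.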

Mark your calendar.

Four milestones. Six weeks. Your window to build something extraordinary.

1
3 Apr
Registration Opens
2
10 Apr
Registration Closes
3
30 Apr
Submission Deadline
4
15 May
Final Evaluation

Our growing genius community

Total Submissions
Participants
Participating Locations
Teams Registered

Everything you need to build.

From cloud platforms to no-code tools — use whatever stack helps you ship the best solution.

Frequently asked questions

Who can participate in Genius Hacks 2026?
All Myridius employees globally can participate, regardless of technical background. We welcome both technical and non-technical team members.
Do I need coding experience?
Not at all. Teams can build solutions using no-code platforms like Retool, n8n, Microsoft Power Platform, Bubble, Glide, or Chatbase. The Functional Track is specifically designed to be accessible to non-technical participants.
What is the team size and composition?
Teams must have exactly 3 members. You can mix technical and non-technical backgrounds. Cross-regional teams are encouraged but not mandatory. If you register individually, our panel will help form teams.
What information is needed at registration?
You will need: Team name & problem selection, Problem statement (200 words), Proposed solution approach (300 words), Team member profiles (name, role, location), and Expected outcomes & impact.
How will solutions be evaluated?
Primary criteria (70%): Use of AI tools & innovation (25%), Technical implementation (25%), Business impact (20%). Secondary criteria (30%): Presentation quality (15%), Team collaboration (10%), Future scalability (5%).
What platforms and tools can we use?
Any combination of cloud computing (AWS/Azure/GCP), GenAI platforms (OpenAI, Claude, Gemini), development stacks, and no-code platforms. You can also use OpenLLaMA, crewAI, LangFlow, Salesforce, and ServiceNow.
What is a mentor's role in this?
Mentors help your team move faster by providing deep expertise in your chosen problem. They challenge your assumptions, offer expert feedback on your approach, and guide you to translate your ideas into a high-impact, measurable outcome grounded in reality.
What happens to winning solutions?
Winning solutions will be considered for integration with Evoq offerings and client presentations. Solutions with sizeable market potential will receive funding and mentorship to develop into go-to-market offerings. There is also opportunity for IP protection.
Can I change team composition after registering?
Minor changes may be accommodated before the registration close date of 10 April. Contact the support team through the dedicated Teams channel for assistance.
Are there restrictions on external APIs or services?
No major restrictions. Teams can use external APIs and services and integrate with existing systems as needed. Just ensure you have proper access and permissions for any services used.
Registrations open 3 April 2026

Your idea could be
the one that ships.

Every year, solutions from Genius Hacks make it into real products and client demos. This year's challenges are bigger, the opportunity is broader, and the stage is yours. Don't sit this one out.

Need help? We've got you.

Response within 24 hours