How Our Hackathon Bot Became a Company-Wide Knowledge Engine

Using OpenWebUI and AWS Bedrock, we built an internal chatbot that helps teams move faster, with answers grounded in our sources of truth.
What Started as a Hackathon Became Our Second Brain
A few months ago, I wasn’t asked to build a company-wide AI assistant. The request was smaller: could I help the growth team analyze transcripts from dozens of user interviews?
That simple ask sparked a hackathon project and became one of the most impactful internal tools we’ve launched at Insify. Today, that prototype has evolved into a secure, AI-powered assistant that acts as a second brain across the company, helping new hires get up to speed, ops teams answer customer questions faster, and marketers surface product facts on demand.
But it didn’t happen overnight.
Why Not Just Use ChatGPT?
That was the first question everyone asked, and it’s a fair one.
Public tools like ChatGPT and Gemini are powerful, but for our use case, they came with two major deal-breakers:
Data Privacy: We couldn’t risk piping sensitive customer data or proprietary docs into a third-party model.
Verifiability: For ops and compliance, AI hallucinations aren’t a minor bug—they’re a business risk. We needed answers backed by sources we trust and control.
So this became a classic build-vs-buy decision. We decided to build. Not from scratch—but on open and secure foundations we could control:
Frontend: OpenWebUI – an open-source chat interface with SSO support
Backend: AWS Bedrock – allowing us to plug in different models and compare
Data Layer: AWS Bedrock Knowledge Bases – letting us embed internal documents into an AI-accessible index (RAG)
This setup gave us the speed of managed infrastructure and the precision of a closed loop, all while keeping cost in mind. AWS Bedrock gives us secure access to the latest LLMs and charges based on usage, which saved us the trouble of hosting those models ourselves on expensive GPU-powered machines.
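For the curious, the RAG flow boils down to a single Bedrock API call that retrieves relevant document chunks and generates a grounded answer in one step. A minimal sketch (the knowledge base ID and model ARN below are placeholders, not our real configuration):

```python
# Sketch of querying a Bedrock Knowledge Base (RAG) via boto3.
# The knowledge base ID and model ARN are placeholders.

def build_rag_request(
    question: str,
    kb_id: str = "KB_ID_PLACEHOLDER",
    model_arn: str = "arn:aws:bedrock:eu-west-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
) -> dict:
    """Build the payload for the bedrock-agent-runtime retrieve_and_generate call."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

# In production the payload is sent like this:
#   client = boto3.client("bedrock-agent-runtime")
#   response = client.retrieve_and_generate(**build_rag_request("How do I submit expenses?"))
#   response["output"]["text"]   # the grounded answer
#   response["citations"]        # the source chunks backing it
```

The `citations` field in the response is what makes the verifiability requirement work: every answer can point back to the internal document it came from.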
Making Lemonade Out of Lemons
Of course, things didn’t just “click.”
Out of the box, most integrations didn’t meet our bar. So we had to build around them:
Google Drive? No native support in Bedrock. I built a custom Lambda function that syncs and sanitizes docs into S3.
Excel Files? Our spec sheets were unreadable. I wrote a converter that slices each file into tab-level CSVs and injects metadata so the AI knows how to interpret it.
Confluence? Even the official connector failed with tables and tags. I built a more robust parser to handle it.
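To give a flavor of the Excel workaround: the core idea is to split a workbook into one CSV per tab and pair each CSV with a small metadata sidecar, so the retriever knows which file and sheet an answer came from. Here is a simplified, self-contained sketch, using an in-memory dict in place of a real .xlsx reader; the metadata attribute names are illustrative, not our actual schema:

```python
import csv
import io
import json

def slice_workbook(workbook_name: str,
                   tabs: dict[str, list[list[str]]]) -> dict[str, tuple[str, str]]:
    """Split a workbook (tab name -> rows) into per-tab CSVs, each paired with
    a JSON metadata sidecar of the kind Bedrock Knowledge Bases can ingest."""
    out = {}
    for tab_name, rows in tabs.items():
        buf = io.StringIO()
        csv.writer(buf).writerows(rows)
        # Sidecar metadata lets the index attribute answers to a specific tab.
        sidecar = json.dumps({
            "metadataAttributes": {
                "source_file": workbook_name,
                "sheet": tab_name,
            }
        })
        out[f"{workbook_name}__{tab_name}.csv"] = (buf.getvalue(), sidecar)
    return out

# Example: two tabs become two CSV/metadata pairs ready for upload to S3.
files = slice_workbook("pricing.xlsx", {
    "AOV": [["product", "price"], ["liability", "42"]],
    "Refunds": [["rule", "days"], ["disability", "14"]],
})
```

The real converter reads the .xlsx with a spreadsheet library and uploads both files to the S3 bucket that backs the knowledge base, but the slicing-plus-metadata pattern is the part that made our spec sheets usable.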
Each blocker was frustrating. But I believe in making lemonade out of lemons. These workarounds became an edge—not a compromise. They gave us full control over quality and context.
The Hardest Part? Getting People to Come Back
Our first internal rollout wasn’t perfect.
The early version was too slow. Queries would lag. Users lost patience—and stopped using it. That was a tough moment. So I rebuilt the backend, optimized performance, and then had to do the harder part: rebuild trust. I demoed in team meetings, shared quick videos on Slack, and partnered with domain experts to validate answers.
Now, usage is climbing every week. It’s becoming a go-to tool for:
New Joiners — “How do I submit expenses?” → Link to the policy.
Customer Ops — “What’s the rule for Disability Insurance refunds?” → Contextual answer, mid-chat.
Marketing — “Where’s the latest AOV pricing update?” → Instant document reference.
It’s Not a Side Project Anymore
What started as a hackathon experiment is now something much bigger. It’s evolving into shared infrastructure, something we rely on every day.
This assistant isn’t just answering questions. It’s helping teams move faster, onboard smarter, and stay aligned. I see it becoming as natural as Slack or Google Drive: you don’t think about using it, you just do.
We’re already seeing it shift how people work. Ops teams can stay in the flow while chatting with customers. Growth teams are getting insights faster. And new hires don’t have to dig—they just ask.
That’s the vision: an internal AI that quietly clears friction, sparks new ideas, and gives every Insifyer a bit more leverage. Not as a shiny add-on, but as a foundational part of how we work.
And the best part? We’re just getting started.
What This Says About Our Engineering Culture
This project wasn’t on any roadmap. I saw a need, and I was trusted to solve it.
That’s what I value about working at Insify. We’re big enough to do things right—invest in security, build with scale in mind, review each other’s code. But we’re small enough to move fast, experiment, and own problems end-to-end.
Our tech team is diverse, high-trust, and highly collaborative. We hold weekly deep-dive demos. We reuse each other’s building blocks instead of reinventing the wheel. And when new technologies emerge—like AI tooling—we actually test them, pick what works, and ship.
We’re not chasing hype. We’re solving real problems.
We’re Hiring
If you like building things that make real people’s jobs easier—come join us.
We’re looking for engineers who want autonomy, challenge, and the chance to shape how AI actually gets used in the workplace.