
Launch HN: Hyprnote (YC S25) – An open-source AI meeting notetaker

Hi HN! We're Yujong, John, Duck, and Sung from Hyprnote (https://hyprnote.com). We're building an open-source, privacy-first AI note-taking app that runs fully on-device. Think of it as an open-source Granola. No Zoom bots, no cloud APIs, no data ever leaves your machine.

Source code: https://github.com/fastrepl/hyprnote

Demo video: https://hyprnote.com/demo

We built Hyprnote because some of our friends told us that their companies banned certain meeting notetakers due to data concerns, or they simply felt uncomfortable sending data to unknown servers. So they went back to manual note-taking - losing focus during meetings and wasting time afterward.

We asked: could we build something just as useful, but completely local?

Hyprnote is a desktop app that transcribes and summarizes meetings on-device. It captures both your mic input and system audio, so you don't need to invite bots. It generates a summary based on the notes you take. Everything runs on local AI models by default, using Whisper and HyprLLM. HyprLLM is our proof-of-concept model fine-tuned from Qwen3 1.7B. We learned that summarizing meetings is a very nuanced task and that a model's raw intelligence (or parameter count) doesn't matter THAT much. We'll release more details on evaluation and training once we finish the 2nd iteration of the model (it's still not that good; we can make it a lot better).

Whisper inference: https://github.com/fastrepl/hyprnote/blob/main/crates/whispe...

AEC inference: https://github.com/fastrepl/hyprnote/blob/main/crates/aec/sr...

LLM inference: https://github.com/fastrepl/hyprnote/blob/main/crates/llama/...

We also learned that for some folks, having full data controllability was as important as privacy. So we support custom endpoints, allowing users to bring in their company's internal LLM. For teams that need integrations, collaboration, or admin controls, we're working on an optional server component that can be self-hosted. Lastly, we're exploring ways to make Hyprnote work like VSCode, so you can install extensions and build your own workflows around your meetings.
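Since "custom endpoints" in this space usually means an OpenAI-compatible API, here is a minimal sketch of what pointing the app at a company-internal LLM could look like, using only the standard library. The base URL, API key, and model name are all hypothetical:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Build a request against an OpenAI-compatible chat endpoint.

    base_url might be an internal gateway, e.g. "https://llm.corp.internal/v1".
    """
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request(
    "https://llm.corp.internal/v1",  # hypothetical internal endpoint
    "sk-internal",                   # hypothetical key
    "qwen3-1.7b",
    [{"role": "user", "content": "Summarize these meeting notes: ..."}],
)
print(req.full_url)  # https://llm.corp.internal/v1/chat/completions
```

Because the wire format is the same, swapping between a local model, an internal gateway, or a cloud provider is just a change of `base_url`.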

We believe privacy-first tools, powered by local models, are going to unlock the next wave of real-world AI apps.

We're here and looking forward to your comments!


itsalotoffun

I'm always amazed at these relatively tiny projects that "launch" with a "customers" list that reads like they've spent 10 years doing hard outbound enterprise sales: Google, Intel, Apple, Amazon, Deloitte, IBM, Ford, Meta, Uber, Tencent, etc.

Aurornis

This one is especially bad because I doubt all of those companies allow employees to install unapproved software that records meetings and uses so many 3rd party APIs

The social proof logo list is an old scheme on the growth hacking checklist. There was a time when it was supposed to mean the company had purchased the software. Now it just means they knew someone who worked at those companies who said they’d check it out.

At this point, when I visit a small product’s landing page and see the logo list the first thing I think of is that they’re small and desperate to convince me they’re not.

snthpy

Wow, I had no idea that the bar was that low. That's ridiculous. I think I'll follow your approach from now on.

herval

We’re firmly in a world where “cheat on everything” is an acceptable business, startups that were hacked together in a week at YC claim they have SOC2 and vibecoded GPT wrappers claim they “trained a model”. Shameless lying took over tech, and if anyone catches you lying, you double down, make a scene and a bunch of podcasts will talk about you. Free advertising!

Of course, dishonesty is as old as time, but these last couple of years have been hard to watch…

quantum_state

People learn fast from the Whitehouse

paulhart

Yeah, IBM employee here, not speaking on behalf of the company, own opinions etc. The odds this is approved for employee use are essentially zero.

johntopia

have to admit that we did some logo plays. but our users really are all over the place and we just wanted to show that! i'm not sure how it came across, but that's why, to be honest, we didn't use terms like "teams" or "customers" while showing some validation.

petesergeant

> we did some logo plays

Help me understand what this means

abxyz

The most honest version is the company is paying for the tool. The most stretched version I’ve seen is a former employee of a company uses the tool in a personal capacity. Most commonly for newly launched things it means someone with an @company email has tried the tool (even if they didn’t pay). You could, for example, set up a waitlist and then let anyone with a logo-worthy email in.

johntopia

to show that we're acknowledged by many users from various orgs. we listed users we talked to, but we don't know if they all still use it, as some of them are no longer reachable (lost contact). i admit that we wanted to seem official, and that's why we had all these logos of where our users are "from".

48terry

> i am not sure how it looked

Well, it looks a lot like you're playing word games to get clout-by-association that you don't necessarily deserve. That doesn't seem like something an authentic person (or people) would try to do. Are the other claims about your team and software equally unserious?

Lionga

"Logo play" is such a YCombinator word for Lie.

yaseer

It says "Our users are everywhere" and shows some logos for the companies these users are from.

If the users are from those companies, this is not lying.

If they added logos for companies their users are not from, it would be lying.

Adding a logo to your webpage has started to follow different patterns for the stage of the company.

Early stage companies show things like "people at X, Y, Z use our product!" (showing logos without permission), whilst later stage ones tend to show logos after asking for permission, and with more formal case studies.

They may not have asked for permission to show these logos, but that's not the same thing as lying.

yonl

Congrats on the launch. I never understood why an AI meeting notetaker needed SOTA LLMs and subscriptions (talking about literally all the other notetakers) - thanks for making it local first. I use a locally patched-up whisperx + qwen3:1.7 + nomic-embed (of course with a Swift script that picks up the audio buffer from the microphone) and it works just fine. Occasionally I create next steps / SOPs from the transcript - for that I use Gemini 2.5 and export it as a PDF. I'll give Hyprnote a try soon.

I hope, since it's open source, you are thinking about exposing APIs / hooks for downstream tasks.
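One practical detail in a pipeline like the one described: a small model such as qwen3:1.7 has a limited context window, so long transcripts are typically chunked before summarization. A rough sketch of greedy chunking; the size limit is arbitrary:

```python
def chunk_transcript(segments: list[str], max_chars: int = 4000) -> list[str]:
    """Greedily pack transcript segments into chunks that fit a small model's context."""
    chunks, current = [], ""
    for seg in segments:
        if current and len(current) + len(seg) + 1 > max_chars:
            chunks.append(current)
            current = seg
        else:
            current = f"{current}\n{seg}" if current else seg
    if current:
        chunks.append(current)
    return chunks

# Fake diarized segments, just to exercise the function.
segments = [f"Speaker {i % 2}: point number {i}" for i in range(100)]
chunks = chunk_transcript(segments, max_chars=500)
print(len(chunks), all(len(c) <= 500 for c in chunks))
```

Each chunk can then be summarized independently, with the per-chunk summaries merged in a final pass.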

Aurornis

> I never understood why an AI meeting notetaker needed sota LLMs and subscriptions

I’m the opposite: If something is expected to accurately summarize business content, I want to use the best possible model for it.

The difference between a quantized local model that can run on the average laptop and the latest models from Anthropic, Google, or OpenAI is still very significant.

pollyrocket

For summarizing context, it's not that far off. I've summarized notes using Claude Sonnet 3.7 and Qwen3 8B, and there's a difference, but it's not huge.

yujonglee

What kind of APIs/hooks would you expect us to expose? We're down to do that.

sjayasinghe

The ability to receive live transcripts via a webhook, including speaker diarization metadata, would be super useful.

yujonglee

webhook to the localhost server, right?
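For illustration, a live-transcript webhook event with diarization metadata might look something like the following. Every field name here is invented, not Hyprnote's actual schema:

```python
import json

# Hypothetical payload shape for one live-transcript event with diarization metadata.
event = json.dumps({
    "type": "transcript.segment",
    "start_ms": 12_000,
    "end_ms": 14_500,
    "speaker": {"id": 1, "label": "Speaker 1"},
    "text": "Let's move on to the roadmap.",
    "final": True,
})

def parse_segment(raw) -> tuple[str, str]:
    """Extract (speaker label, text) from a transcript webhook event."""
    data = json.loads(raw)
    assert data["type"] == "transcript.segment"
    return data["speaker"]["label"], data["text"]

speaker, text = parse_segment(event)
print(f"{speaker}: {text}")  # Speaker 1: Let's move on to the roadmap.
```

A localhost receiver would just be an HTTP handler that calls something like `parse_segment` on each POST body and feeds the result to the downstream task.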

headcanon

registering an MCP server and calling an MCP tool upon transcript completion (and/or summary completion) would help (check out actionsperminute.io for the vision there).

Calendar integration would be nice to link transcripts to discrete meetings.

yujonglee

That makes sense.

Please add more details here: https://github.com/fastrepl/hyprnote/issues/1203

For calendar, we have native Apple Calendar integration on macOS.

satvikpendem

Can you share the Swift script? I was thinking of doing something similar but was banging my head against the audio side of macOS.

teiferer

Nice!

Would be great if you could include in your launch message how you plan to monetize this. Everybody likes open source software and local-first is excellent too, but if you mention YC too then everybody also knows that there is no free lunch, so what's coming down the line would be good to know before deciding whether to give it a shot or just move on.

yujonglee

For individuals:

We have a Pro license implemented in our app. Some non-essential features like custom templates or multi-turn chat are gated behind a paid license. (A custom STT model will also be included soon.) There's still no sign-up required. We use keygen.sh to generate offline-verifiable license keys. Currently, it's priced at $179/year.

For business:

If they want to self-host some kind of admin server with integrations, access control, and SSO, we plan to sell a business license.
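A side note on "offline-verifiable license keys": the idea is that the key itself carries a signed payload the app can check without any network call. keygen.sh uses asymmetric signing in practice; the sketch below substitutes a symmetric HMAC from the standard library purely to show the shape, so it is not how a real scheme should be built (a shipped app would embed only a public key, never the signing secret):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-signing-secret"  # stand-in; real schemes use an asymmetric keypair

def issue_license(payload: dict) -> str:
    """Create an offline-verifiable key: base64(payload).signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_license(key: str):
    """Verify locally, with no network call, and return the payload if valid."""
    body, sig = key.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))

key = issue_license({"plan": "pro", "expires": "2026-01-01"})
print(verify_license(key))        # {'plan': 'pro', 'expires': '2026-01-01'}
print(verify_license(key + "x"))  # None (tampered)
```

The point is the verification path: no sign-up, no phone-home; the app checks the signature and reads the entitlement from the key itself.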

teiferer

Does that mean the admin server is not open source?

thedevilslawyer

Another sso.tax candidate.

Let's actively not support software that chooses anti-security.

johntopia

totally fair concern. we’re actually on the same side when it comes to promoting good security practices like SSO.

the reason we’re gating the admin server under a business license is less about profiting off sso and more about drawing a line between individual and organizational use. it includes a bunch of enterprise-specific features (sso, access control, integrations, ...) that typically require more support and maintenance.

that said, the core app is fully open-source and always will be - so individuals and teams who don’t need the admin layer can still use it freely and privately, without compromising security.

we’ll keep listening and evolving the model - after all, we're still very early and flexible. appreciate the pushback.

(edit: added some more words to reinforce our flexibility)

polyaniline

I just tried to build on Linux and it keeps panicking because it requires dozen(s) of API keys. I was not expecting that from local first software.

yujonglee

That is not actually required. You can even set an empty key for it.

Also Linux issue pointer: https://github.com/fastrepl/hyprnote/issues/67#issuecomment-...

polyaniline

I get an "EmptyToken" error if I leave keys like AXIOM_TOKEN empty. I'm sure I can remove the requirements in code, but it's just that this wasn't expected from reading the project description.

Anyway, thanks for your work and good luck!

hahajk

I'm also interested in learning about why the API keys are required to build.

yujonglee

There are only 2 API keys required to build (POSTHOG, SENTRY), and they're only required for release builds, not dev builds.

I made them required to prevent accidentally shipping the app without any analytics/error tracking. (Analytics can be opted out of.)

For ex, https://github.com/fastrepl/hyprnote/blob/327ef376c1091d093c...

EDIT: Prod -> release
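The guard described above amounts to a small build-time check. A sketch with hypothetical variable names and profile handling:

```python
def check_build_keys(profile: str, env: dict) -> None:
    """Fail release builds that lack analytics/error-tracking keys; dev builds pass."""
    if profile != "release":
        return
    for name in ("POSTHOG_API_KEY", "SENTRY_DSN"):  # hypothetical variable names
        if not env.get(name):
            raise RuntimeError(f"release build requires {name} to be set")

check_build_keys("dev", {})  # fine: dev builds don't need the keys
try:
    check_build_keys("release", {"POSTHOG_API_KEY": "phc_123"})
except RuntimeError as e:
    print(e)  # release build requires SENTRY_DSN to be set
```

Failing fast at build time, rather than discovering a missing key at runtime, is the whole point of the check.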

vanschelven

You might want to reconsider ”no data ever leaves your machine" from the post :)

Given your target market, have you considered looking at Bugsink?[0] sentry compatible. still not local, but at least you won't have to additionally ask your customers to trust sentry/posthog.

Disclosure: that's me

[0] https://www.bugsink.com/

btown

How are you balancing accuracy vs. time-to-word-on-live-transcript? Is this something you're actively balancing, or can allow an end user to tune?

I find myself often using otter.ai - because while it's inferior to Whisper in many ways, and anything but on-device, it's able to show words on the live transcript with minimal delay, rather than waiting for a moment of silence or for a multi-second buffer to fill. That's vital if I'm using my live transcription both to drive async summarization/notes and for my operational use in the same call, to let me speed-read to catch up to a question that was just posed to me while I was multitasking (or doing research for a prior question!)

It sometimes boggles me that we consider the latency of keypress-to-character-on-screen to be sacrosanct, but are fine with waiting for a phrase or paragraph or even an entire conversation to be complete before visualizing its transcription. Being able to control this would be incredible.
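For intuition on the tradeoff: ignoring inference time entirely, the audio buffer alone puts a floor on how quickly a word can appear on screen, roughly the chunk duration. A quick back-of-the-envelope with illustrative numbers:

```python
def buffer_latency_ms(chunk_samples: int, sample_rate: int = 16_000) -> float:
    """Minimum added latency from waiting to fill one audio chunk, ignoring inference time."""
    return 1000 * chunk_samples / sample_rate

# Whisper-style models are often fed multi-second windows; streaming setups use much smaller hops.
for chunk in (16_000 * 5, 16_000, 3_200):  # 5 s, 1 s, 200 ms of 16 kHz audio
    print(f"{chunk:>6} samples -> {buffer_latency_ms(chunk):>5.0f} ms before a word can appear")
```

Smaller hops reduce that floor but mean more inference passes per second, which is where the extra compute cost of low-latency local transcription comes from.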

yujonglee

It's more of an AI model problem than an app-logic one. Doing it more frequently requires more computation, though things like speculative decoding can help.

Doing it locally is hard, but we expect to ship it very soon. Please join our Discord (https://hyprnote.com/discord) if you're interested in hearing from us.

crashabr

Do you intend to reach feature parity with something like MacWhisper? I'd love to switch to something open source, but automated meeting detection, push to transcribe (with custom rewrite actions) are two features I've learned to love, beside basic transcript. I also enjoy the automatic transcription from an audio, video or a even a YouTube link.

But because MacWhisper does not store transcripts or do much with them (other than giving you export options), there are some missed opportunities: I'd love to be able to add project tags to transcripts, so that any new transcript is summarized with the context of all previous transcript summaries that share the same tag. Thinking about it maybe I should build a Logseq extension to do that myself as I store all my meeting summaries there anyway.

Speaker detection is not great in MacWhisper (at least in my context where I work mostly with non native English speakers), so that would be a good differentiation too.

johntopia

definitely planning to catch up to other tools - we ship FAST!

automated meeting detection - working on this. push to transcribe - want to understand more about this. (could we talk more over at our discord? https://hyprnote.com/discord)

if you're using logseq, we'd love to build an integration for you.

finally, speaker identification is a big challenge for us too.

so many things to do - so exciting!

dexterdog

> finally, speaker identification is a big challenge for us too.

But your home page makes it look like you already have it. I just tried it in a 30-minute meeting with 20 people and it put the entire conversation under a single speaker, in a single paragraph.

yujonglee

I acknowledge that it's outdated. When we were writing the landing page, we thought we could ship the first version of speaker diarization/identification within a few days.

However, due to various challenges and priority changes, we haven't been able to do so yet. We'll update the landing page soon.

mentalgear

Looks great & kudos for making it local-first & open-source, much appreciated!

From a business perspective, and as someone looking also into the open-source model to launch tools, I'd be interested though how you expect revenue to be generated?

Is it solely relying on the audience segment that doesn't know how to hook up the API manually to use the open-source version? How do you calculate this, since pushing it via open-source/github you would think that most people exposed to it are technical enough to just run it from source.

yujonglee

I mentioned the monetization plan in another thread (search for 'license').

Hope that makes sense!

jihomie

Hello, I'm a manager at a tech firm. I came across your product and feel it could be the thing I've been looking for; some notetakers were recently banned here. Impressive demo video btw!

First of all, I want to try this on my personal device. How can I set up your product on my devices?

I'd also like to discuss some points (on-device, self-hosting, open-source, etc.) to help persuade our compliance team.

johntopia

would love to support you! you can just download it from our website if you're using a mac.

jihomie

Thanks for the prompt response.

How about Windows (native Windows, rather than a VM or WSL)? What's your plan for a Windows version launch and support?

johntopia

windows will be launched in august! is your org mostly using windows? or is it just you?

p2hari

I just downloaded it on a Mac mini M4 Pro. I installed the Apple Silicon version and tried to launch it, but it fails with no error message or anything; the icon just keeps bouncing in the dock. I assumed it needed privacy, screen recording, and audio permissions and explicitly granted them, but it still just bounces in the dock and the app does not open. (macOS Sequoia 15.5)

yujonglee

Seems like

https://github.com/fastrepl/hyprnote/blob/d0cb0122556da5f517...

this is invalid on Mac mini. Should be fixed today.

johntopia

working on trying to identify the problem! could you come over to our discord where we could better support you? https://hyprnote.com/discord

yujonglee

That is very strange. Can you launch it from the command line and share what you got?

/Applications/Hyprnote.app/Contents/MacOS/Hyprnote

theodorewiles

Looks really cool - I noticed Enterprise has smart consent management?

The thing I think some enterprise customers are worried about in this space is that in many jurisdictions you legally need to disclose recording - having a bot join the call can do that disclosure - but users hate the bot and it takes up too much visibility on many of these calls.

Would love to learn more about your approach there

johntopia

yes, we’re rolling out flexible consent options based on legal needs - like chat messages, silent bots, blinking backgrounds, or consent links before/during meetings. but still figuring out if there's a more elegant way to do this. would love to hear your take as well.

theodorewiles

Please shoot me a note - I'm trying to figure this out for my enterprise now, would love to figure out a way to get you in / trial it out.

johntopia

can i send you a follow-up to the email that's on your profile?

nashashmi

I was talking about this a week ago. One person wanted to make a PDF tutorial on how to use a piece of software. I asked him to record himself in Teams, share his screen, and have AI take notes. It will create a fabulous summary with snapshots of everything he goes over.

rushingcreek

Congrats on the launch! I'm very bullish on how powerful <10B-param models are becoming, so the on-device angle is cool (and great for your bottom line too, as it's cheaper for you to run).

Something that I think is interesting about AI note taking products is focus. How does it choose what's important vs what isn't? The better it is at distinguishing the signal from the noise, the more powerful it is. I wonder if there is an in-context learning angle here where you can update the model weights (either directly or via LoRA) as you get to know the user better. And, of course, everything stays private and on-device.

yujonglee

> How does it choose what's important vs what isn't?

The idea of Hyprnote is that you write chicken-scratch raw notes during the meeting (whatever you think is important), and the AI enhances the summary based on them.

On-device learning is interesting too. For example, Gboard: https://arxiv.org/abs/2305.18465

And yes - we are open to this too
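That notes-steer-the-summary flow can be sketched as a prompt builder; the wording below is invented for illustration, not Hyprnote's actual prompt:

```python
def build_enhance_prompt(raw_notes: str, transcript: str) -> str:
    """Combine the user's raw notes with the transcript; the notes signal what matters."""
    return (
        "You are enhancing meeting notes.\n"
        "Treat the user's raw notes as the signal for what was important.\n\n"
        f"RAW NOTES:\n{raw_notes}\n\n"
        f"TRANSCRIPT:\n{transcript}\n\n"
        "Write a structured summary that expands each raw note using the transcript."
    )

prompt = build_enhance_prompt(
    "- pricing pushback\n- ship windows build aug",
    "Speaker 0: The main concern from the customer was pricing...",
)
print(prompt.splitlines()[0])  # You are enhancing meeting notes.
```

Grounding the summary in the user's own notes is what distinguishes this design from notetakers that summarize the transcript alone.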
