primitivesuave
Last year, I made a YouTube documentary series showcasing the prolific corruption in a small city government. I downloaded all the city government meetings, used Whisper to transcribe them, and then set up a basic RAG so I could query across a decade of committee meetings (around 1 TB of video). Once I had the timestamps I was interested in, I then had to embark on a tedious manual process: locating the file, cutting a few seconds or minutes out of a multi-hour video, and ordering all the clips into a cohesive narrative.
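The cutting step, at least, is scriptable once you have the timestamps. A rough sketch (file names and timestamps here are made up, and with ffmpeg's stream copy the cuts land on keyframes):

    # cut each (source, start, end) span without re-encoding, then concatenate
    import subprocess, pathlib

    clips = [  # hypothetical outputs of the RAG/timestamp step
        ("meetings/2019-03-committee.mp4", "01:12:05", "01:12:41"),
        ("meetings/2021-07-committee.mp4", "00:08:30", "00:09:02"),
    ]

    for i, (src, start, end) in enumerate(clips):
        subprocess.run(["ffmpeg", "-ss", start, "-to", end, "-i", src,
                        "-c", "copy", f"clip_{i:03d}.mp4"], check=True)

    # the concat demuxer needs a file list; all clips must share codecs/resolution
    pathlib.Path("list.txt").write_text(
        "".join(f"file 'clip_{i:03d}.mp4'\n" for i in range(len(clips))))
    subprocess.run(["ffmpeg", "-f", "concat", "-i", "list.txt",
                    "-c", "copy", "story.mp4"], check=True)

The hard part wasn't the cutting itself, it was finding the moments.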
These seem like problems that LLMs are especially well-suited for. I might have spent a fraction of the time if there had been a system that could "index" my content library and intelligently pull relevant clips into a cohesive storyline.
I also spent an ungodly amount of time on animations - it felt like "1 hour of work for 1 minute of animation". I would gladly pay for a tool which reduces the time investment required to be a citizen documentarian.
adishj
hey, thanks for sharing about your documentary series. would love to check it out if you don't mind linking it!
we don't yet support that volume of footage (1 TB); however, if you'd like to try this at a smaller scale, you can do it today with the Rough Cut tile — simply prompt it for the moments you're interested in (it can take visual cues, auditory cues, timestamp cues, script cues) and it will create an initial rough cut or assembly edit for you.
I'd also recommend checking out the new Motion Graphics tile we added for animations. You can also single-point generate motion graphics using the utility on the bottom right of the timeline. Let me know if you have any questions on that.
hypnagogicjerk
An additional suggestion for OP when working with large video archives:
- Batch-transcode your videos to smaller proxy files, preserving the same file names to allow easy re-linking to full-quality media later (sketch below)
- Upload the proxies to Mosaic
- Do your agentic rough cut with Mosaic
- Export an EDL or NLE project file
- In your NLE, re-link the proxy media to the full-quality video and render locally
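For the proxy step, something like this works (assumes ffmpeg with libx264; the resolution and CRF are just a starting point):

    # batch-transcode masters to low-res proxies, keeping file names identical
    import subprocess
    from pathlib import Path

    src_dir, proxy_dir = Path("masters"), Path("proxies")
    proxy_dir.mkdir(exist_ok=True)

    for f in sorted(src_dir.glob("*.mp4")):
        subprocess.run([
            "ffmpeg", "-i", str(f),
            "-vf", "scale=-2:540",                    # 540p, width kept even
            "-c:v", "libx264", "-preset", "fast", "-crf", "28",
            "-c:a", "aac", "-b:a", "96k",
            str(proxy_dir / f.name),                  # same name => trivial re-link
        ], check=True)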
To Mosaic:
I need to look deeper at your project, but support for EDL export (compatible with Avid, Premiere, and Final Cut, as well as commercial grading and conform workflows) and upload/management of proxy media could be helpful additional features.
adishj
Hey there! We already support XML exports to DaVinci Resolve, Final Cut Pro, and Premiere Pro!
We also do transcoding of all uploaded files to lower-res proxies, which can be re-linked when brought back into a more traditional NLE.
primitivesuave
Absolutely - the channel is called "Dolton Documentaries" on YouTube. I'll definitely check out the features you mentioned, and am super excited to see where this goes!
breadislove
you should check mixedbread out. we support indexing multimodal data and making data ready for AI. we are adding video and audio support by the end of the year. might be interesting for the OP as well.
we have a couple of investigative journalists and lawyers using us for a similar use case.
adishj
curious: how does this compare to something like Memories.ai when it comes to video in particular?
robotswantdata
Gemini 3 would rip through that problem, but equally you could slice the video with existing open-source tooling such as FFmpeg, then combine the clips in Blender for the curation. Gemini 3 could probably write you the workflow as well.
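e.g. Blender's Video Sequence Editor is scriptable in Python (rough sketch, run headless with blender --background --python edit.py; the clip names are hypothetical):

    # lay pre-cut clips end-to-end on the sequencer timeline, then render
    import bpy

    scene = bpy.context.scene
    scene.sequence_editor_create()

    frame = 1
    for i, path in enumerate(["clip_000.mp4", "clip_001.mp4"]):
        strip = scene.sequence_editor.sequences.new_movie(
            name=f"clip{i}", filepath=path, channel=1, frame_start=frame)
        frame += strip.frame_final_duration

    scene.frame_end = frame - 1
    scene.render.image_settings.file_format = "FFMPEG"
    scene.render.ffmpeg.format = "MPEG4"
    scene.render.filepath = "combined.mp4"
    bpy.ops.render.render(animation=True)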
mauflows
What part would Gemini do well at? What would you feed it?
cjbarber
I think this is a great endeavor. I was thinking about a channel that I like watching on YouTube. They travel to exotic places by boat and film themselves, nature documentary style. To make good videos requires going to these places, a ton of filming, AND a ton of editing. They put out a video every 2 weeks or so on their trips. I imagine the editing is the hard part.
This is a long-winded way of saying that I think creators need what you're making! People who have hours of awesome footage but have to spend dozens of hours cutting it down need this. The same goes for people who have awesome footage but aren't good at editing (or at hiring an editor). I'd love to see someone solve this so that 90th-percentile editing is available to all, and then it can be more about who has the interesting content, rather than who has the interesting content and editing skills.
adishj
thanks! Mosaic can already do the rough cuts for you — so you can upload all your footage from your travel, and prompt it to "make a 2 minute highlight reel of your trip to Japan", for instance.
soon, we also plan to incorporate style transfer, so you could even give it a video from the channel you enjoy watching + your raw footage, and have the agent edit your footage in the same style of the reference video.
mrbluecoat
> you can upload all your footage from your travel, and prompt it to "make a 2 minute highlight reel of your trip to Japan"
In relation to the demo requests below, I think this would be a good example of how an average person might use your platform.
adishj
for a demo, check out this one that I put together using 81 clips from a skydiving trip we took in Monterey, CA:
https://edit.mosaic.so/links/c51c0555-3114-45f4-ab8f-c25f172...
moinism
Hey, this is super cool. Congrats on the product and the launch!
I'm building something very similar and couldn't believe my eyes when I saw the HN post. What I'm building (chatoctopus.com) is more of a chat-first agent for video editing, and it's only at a prototype stage. But what you guys have achieved is insane. Wishing you lots of success.
To healthy competition!
adishj
thank you! chatoctopus looks pretty cool, I'm trying it out right now!
how did you find the chat-first interface to work out for video? what we found is that the response times can be so long that the chat UX breaks down a bit. how are you thinking about this?
adishj
looks like I got a network error
ack210
I just signed up for a Creator plan, but it looks like the automated "Thank you for being a Mosaic Creator" email is not configured correctly. Instead of my company name, it referenced a different business's name and description (one that seems to actually exist, so not a placeholder).
adishj
Hey! Thanks for calling this out — looking into what happened here & fixing right now.
adishj
This has been fixed now.
cube00
Could you please expand on how this happened and what else was at risk?
djeastm
Move fast, break things... like privacy regulations.
ansc
Woah, yikes.
Forgeties79
> We got frustrated trying to accomplish simple tasks in video editors like DaVinci Resolve and Adobe Premiere Pro. Features are hidden behind menus, buttons, and icons, and we often found ourselves Googling or asking ChatGPT how to do certain edits.
Hidden behind a UI? Most of the major tools like blade, trim, etc. are right there on the toolbars.
> We recorded hours of cars driving by, but got stuck on how to scrub through all this raw footage to edit it down to just the Cybertrucks.
Scrubbing is the easiest part. Mouse over the clip, it starts scrubbing!
I’m being a bit tongue-in-cheek, and I totally agree there is a learning curve to NLEs, but those complaints were also a bit striking to me.
adishj
hey! You're right that most of the basic tools like splitting / trimming are available right in the timeline. but for things like adding a keyframe to animate a counter, for instance, I had no idea where to go or how to start.
Scrubbing is easy enough when you have short footage, but imagine scrubbing through the footage we had of 5 hours of cars driving by, or maybe a bunch of assets. This quickly becomes very tedious.
Forgeties79
Hey, I just wanted to come back and be clear that yeah, I was being tongue-in-cheek, but looking back at it, it comes off as a little snarky/“this isn’t even a thing!” and I’m sorry for that - what you built is really cool and I’m excited to try it out.
Good luck out there!
adishj
no worries at all. compared to some other comments in this thread, I didn't find your tone snarky at all. I appreciate your engaging with the conversation and the thread :)
Forgeties79
I don’t need to imagine, I do it haha. But again, I was being tongue-in-cheek. I personally would love an effective tool that can mark and favorite clips for me based on written prompts. It would save me an awful lot of time!
adishj
curious — what kind of content do you edit?
andrewmlevy
obligatory https://news.ycombinator.com/item?id=9224
Forgeties79
Like I said, the description of some of the issues was just kind of funny to me - I think this could be a potentially very useful tool.
Do you think this is the next Dropbox?
adishj
next dropbox? let's go!
kul
Can it work for this use case? I have lots of short videos (15 seconds to 1 minute) of my kids. Can I upload them all (let's say 10 videos) and have the agent make a single video with all the best bits of them?
adishj
yes! you can upload as many videos as you want (file limits are currently 20 GB and 90 minutes, per file). then I'd recommend using either the Rough Cut tile or the Montage tile to stitch them all together. In those tiles, you can prompt particular visual cues for how you want the videos to be combined. Let me know if you have any questions.
sails
I’ve had a lot of fun with Remotion and Claude Code for CLI video editing. I’ve been impressed with how much traditional video editing I can manage.
I will be checking this out!
adishj
that's super interesting — what kind of things have you done with Remotion and Claude Code?
they're very powerful; when you put them together, it almost feels like Cursor for Video Editing
sails
Mostly using it for technical marketing/explainer videos eg https://x.com/mattarderne/status/1987441582413345016
hypnagogicjerk
Interested in your workflow @sails
sails
Posted a video in the thread. It's pretty rudimentary (Claude Code does everything) at the moment, but I think this has a lot of possibilities.
shraey_92
I like the tile-based workflow approach. I’m curious, is integration with tools like 11labs/cartesia or HeyGen on the cards? It would make it much easier to produce influencer-style POV/first-person content using digital avatars and cloned voice-overs.
Also, do you have an API available to trigger workflows programmatically?
homeonthemtn
These comments real sus.
dang
Yes, a bunch of positive comments from accounts without much posting history appeared in this thread. I assume that the OP's friends got wind of their launch.
We tell founders to avoid that (scroll down to the bold part of https://news.ycombinator.com/yli.html for how we try to scare YC founders into not doing it!) - but to be fair, (1) this is not always easy to control, and (2) people posting such comments think they're helping and don't have enough experience of HN to realize that it has a counter-effect.
I'm going to move the overly sus ones to a collapsed stub now. (https://news.ycombinator.com/item?id=45988584)
adishj
thanks dan
adishj
I agree, things are a bit too kind. give me some more feedback.
rsancheti
And what’s the plan for determinism? For repeat workflows it’s important that the same pipeline produces the same cut each time. Are node outputs consistent or does the model vary run to run?
adishj
since we're building on top of LLMs, which are by nature probabilistic, you won't get the exact same frame-level cut each time, but there is still determinism in the expected outputs
for example, if you have a workflow set up to create 5 clips from a podcast, add b-rolls and captions, and reframe to a few different aspect ratios, then any time you invoke this workflow (regardless of which podcast episode you provide as input), you'll get back 5 clips that have b-rolls and captions and are reframed to a few different aspect ratios
however, which clips are selected, what b-rolls are generated, where they're placed — this is all non-deterministic
you can guide the agent via prompting the tiles individually, but that's still just an input into a non-deterministic machine
satvikpendem
Or just let the user adjust the seed and temperature themselves, or hide it under a checkbox that says "deterministic" with your chosen seed and temperature.
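e.g. with an OpenAI-style API (assuming that's roughly the backend; note that seed is best-effort repeatability, not a hard guarantee):

    # pin sampling so repeat runs make the same cut decisions more often
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,   # greedy-ish decoding
        seed=42,         # best-effort determinism; check resp.system_fingerprint
        messages=[{"role": "user",
                   "content": "Pick the 5 strongest moments from this transcript..."}],
    )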
adishj
good point — we could enable these more granular-level knobs for users if it seems to be something people want
jaccola
Very cool. It definitely feels to me that, with AI, the power of pro tools should be available to more people.
Would have been nice if there were a killer demo on your landing page of a video made with Mosaic.
adishj
that's our perspective as well.
a lot of tooling is being built around generative AI in particular, but there's still a big gap for people that want to share their own stories / experiences / footage but aren't well-versed with pro tools.
valid feedback on the landing page — something we'll add in.
bluelightning2k
The problem is, any video demo of a tool like this is just an entirely unrelated video.
adishj
can you clarify what you mean here? check out this demo video: https://screen.studio/share/SP7DItVD
zkmon
I just clicked the link and encountered a non-scrollable, dark, fixed content pane with loads of flickering images and scrolling text in random font sizes without much meaning. I felt imprisoned, subjected to unexpected suffering, couldn't scroll away, got scared, raced for the window close button, and then breathed easy.
adishj
seems like the landing page is detracting from the main product; this is good feedback, so thanks! For now, avoid the scaries and head directly to https://edit.mosaic.so to try the actual canvas interface
conductr
Since video is your thing, I feel like you need to just make a tightly edited demo reel and put all your energy into getting people to watch that video. Meaning: remove almost all text and bloat from the site and just show us all the cool stuff the product does for/to video editing. Distill it to 60-120 seconds and put that on your landing page; hell, put it on autoplay if you want to, so long as it's clear that's the one thing I'm supposed to be paying attention to.
adishj
yeah, I think a BEFORE vs AFTER demo reel somewhere in the hero, or right below it, would be helpful
dang
I've put the /edit and /docs links in the first sentence above to soften the blow as well :)
pelagicAustral
They really managed to handcraft a unique user experience, that's for sure.
adishj
we did, but the landing page seems to be detracting from it — head directly to https://edit.mosaic.so to try the actual canvas interface
deepspace
I had the same reaction. About what you would expect from a team steeped in the Tesla mindset.
dang
Please don't cross into personal attack. We're trying for the opposite on this site.
adishj
thanks for the feedback — you can head directly to https://edit.mosaic.so to try the actual canvas interface
Hey HN! We’re Adish & Kyle from Mosaic (https://edit.mosaic.so, https://docs.mosaic.so/, https://mosaic.so). Mosaic lets you create and run your own multimodal video editing agents in a node-based canvas. It’s different from traditional video editing tools in two ways: (1) the user interface and (2) the visual intelligence built into our agent.
We were engineers at Tesla and one day had a fun idea to make a YouTube video of Cybertrucks in Palo Alto. We recorded hours of cars driving by, but got stuck on how to scrub through all this raw footage to edit it down to just the Cybertrucks.
We got frustrated trying to accomplish simple tasks in video editors like DaVinci Resolve and Adobe Premiere Pro. Features are hidden behind menus, buttons, and icons, and we often found ourselves Googling or asking ChatGPT how to do certain edits.
We thought that surely now, with multimodal AI, we could accelerate this process. Better yet, an AI video editor could automatically apply edits based off what it sees and hears in your video. The idea quickly snowballed and we began our side quest to build “Cursor for Video Editing”.
We put together a prototype and to our amazement, it was able to analyze and add text overlays based on what it saw or heard in the video. We could now automate our Cybertruck counting with a single chat prompt. That prototype is shown here: https://www.youtube.com/watch?v=GXr7q7Dl9X0.
After that, we spent a chunk of time building our own timeline-based video editor and making our multimodal copilot powerful and stateful. In natural language, we could now ask chat to help with AI asset generation, enhancements, searching through assets, and automatically applying edits like dynamic text overlays. That version is shown here: https://youtu.be/X4ki-QEwN40.
After talking to users though, we realized that the chat UX has limitations for video: (1) the longer the video, the more time it takes to process. Users have to wait too long between chat responses. (2) Users have set workflows that they use across video projects. Especially for people who have to produce a lot of content, the chat interface is a bottleneck rather than an accelerant.
That took us back to first principles to rethink what a “non-linear editor” really means. The result: a node-based canvas which enables you to create and run your own multimodal video editing agents. https://screen.studio/share/SP7DItVD.
Each tile in the canvas represents a video editing operation and is configurable, so you still have creative control. You can also branch and run edits in parallel, creating multiple variants from the same raw footage to A/B test different prompts, models, and workflows. In the canvas, you can see inline how your content evolves as the agent goes through each step.
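To make the shape of this concrete, here's an illustrative sketch of how a branching workflow might compose (pseudo-config only, not our actual schema; tile names and parameters are made up):

    # illustrative only -- not Mosaic's actual workflow format
    workflow = {
        "nodes": [
            {"id": "in",   "op": "ingest",    "params": {"files": "raw/*.mp4"}},
            {"id": "cut",  "op": "rough_cut", "params": {"prompt": "keep only the Cybertrucks"}},
            {"id": "cap",  "op": "captions",  "params": {"style": "cinematic"}},
            {"id": "wide", "op": "reframe",   "params": {"aspect": "16:9"}},
            {"id": "tall", "op": "reframe",   "params": {"aspect": "9:16"}},  # parallel branch
        ],
        "edges": [("in", "cut"), ("cut", "cap"), ("cap", "wide"), ("cap", "tall")],
    }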
The idea is that canvas will run your video editing on autopilot, and get you 80-90% of the way there. Then you can adjust and modify it in an inline timeline editor. We support exporting your timeline state out to traditional editing tools like DaVinci Resolve, Adobe Premiere Pro, and Final Cut Pro.
We’ve also used multimodal AI to build in visual understanding and intelligence. This gives our system a deep understanding of video concepts, emotions, actions, spoken word, light levels, and shot types.
We’re doing a ton of additional processing in our pipeline, such as saliency analysis, audio analysis, and determining objects of significance—all to help guide the best edit. These are things that we as human editors internalize so deeply we may not think twice about it, but reverse-engineering the process to build it into the AI agent has been an interesting challenge.
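For a flavor of the per-frame signals involved, here's a minimal sketch using OpenCV (illustrative, not our production pipeline; the saliency module needs opencv-contrib-python):

    # per-frame saliency + mean-movement signals over a video
    import cv2
    import numpy as np

    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    cap = cv2.VideoCapture("input.mp4")
    prev, signals = None, []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        _, sal_map = saliency.computeSaliency(frame)        # where the eye is drawn
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        movement = float(np.mean(cv2.absdiff(gray, prev))) if prev is not None else 0.0
        prev = gray
        signals.append((float(sal_map.mean()), movement))   # crude per-frame scores

    cap.release()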
Some of our analysis findings:
- Optimal Safe Rectangles: https://assets.frameapp.ai/mosaicresearchimage1.png
- Video Analysis: https://assets.frameapp.ai/mosaicresearchimage2.png
- Saliency Analysis: https://assets.frameapp.ai/mosaicresearchimage3.png
- Mean Movement Analysis: https://assets.frameapp.ai/mosaicresearchimage4.png
Use cases for editing include:
- Removing bad takes or creating script-based cuts from videos / talking-heads
- Repurposing longer-form videos into clips, shorts, and reels (e.g. podcasts, webinars, interviews)
- Creating sizzle reels or montages from one or many input videos
- Creating assembly edits and rough cuts from one or many input videos
- Optimizing content for various social media platforms (reframing, captions, etc.)
- Dubbing content with voice cloning and lip syncing
We also support use cases for generating content such as motion graphic animations, cinematic captions, AI UGC content, adding contextual AI-generated B-Rolls to existing content, or modifying existing video footage (changing lighting, applying VFX).
Currently, our canvas can be used to build repeatable agentic workflows, but we’re working on a fully autonomous agent which will be able to do things like: style transfer using existing video content, define its own editing sequence / workflow without needing a canvas, do research and pull assets from web references, and so on.
You can try it today at https://edit.mosaic.so. You can sign up for free and get started playing with the interface by uploading videos, making workflows on the canvas, and editing them in the timeline editor. We do paywall node runs to help cover model costs. Our API docs are at https://docs.mosaic.so. We’d love to hear your feedback!