Show HN: Vibe Kanban – Kanban board to manage your AI coding agents

github.com

Hey HN! I'm Louis, one of the creators of Vibe Kanban.

We started working on this a few weeks ago. Personally, I was feeling pretty useless working synchronously with coding agents. The 2-5 minutes that they take to complete their work often led me to distraction and doomscrolling.

But there's plenty of productive work that we (human engineers) could be doing in that time, especially if we run coding agents in the background and parallelise them.

Vibe Kanban lets you effortlessly spin up multiple coding agents. While some agents handle tasks in the background, you can focus on planning future work or reviewing completed tasks.

After a few weeks of internal dogfooding and sharing it with friends, we've now open-sourced Vibe Kanban, and it's stable enough for day-to-day use.

I'd love to hear your feedback; feel free to open an issue on the GitHub repo and we'll respond ASAP.


gpm

Hmm, analytics appear to default to enabled: https://github.com/BloopAI/vibe-kanban/blob/609f9c4f9e989b59...

It is harvesting email addresses and github usernames: https://github.com/BloopAI/vibe-kanban/blob/609f9c4f9e989b59...

Then it seems to track every time you start/finish/merge/attempt a task, and every time you run a dev server. Including what executors you are using (I think this means "claude code" or the like), whether attempts succeeded or not and their exit codes, and various booleans like whether or not a project is an existing one, or whether or not you've set up scripts to run with it.

This really strikes me as something that should be, and in many jurisdictions legally must be, opt-in.

louiskw

That's fair feedback. I have a PR with a very clear opt-in here: https://github.com/BloopAI/vibe-kanban/pull/146

I will leave this open for comments for the next hour and then merge.
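The shape of such an opt-in gate can be sketched in a few lines. This is a hypothetical illustration only; the names (`TelemetryClient`, `record_event`) are invented and do not reflect the actual PR's implementation:

```python
# Hypothetical sketch of an opt-in telemetry gate. All names here are
# illustrative, not Vibe Kanban's real API.
from dataclasses import dataclass, field


@dataclass
class TelemetryClient:
    # Opt-in: analytics stay off unless the user explicitly enables them.
    analytics_enabled: bool = False
    sent: list = field(default_factory=list)

    def record_event(self, name: str, props: dict) -> bool:
        """Queue an event for upload only if the user has opted in."""
        if not self.analytics_enabled:
            return False  # silently dropped; nothing leaves the machine
        self.sent.append((name, props))
        return True


client = TelemetryClient()
dropped = client.record_event("task_started", {"executor": "claude-code"})
client.analytics_enabled = True  # user ticked the opt-in box
recorded = client.record_event("task_merged", {"exit_code": 0})
```

The key design point is the default: with `analytics_enabled` false out of the box, forgetting to ask the user results in no data collection rather than silent collection.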

TeMPOraL

Nice, I vote for merging it :).

It really doesn't hurt to be honest about this and ask up-front. This is clear enough and benign enough that I'd actually be happy to opt in.

louiskw

Merged and building, thanks for bearing with us

gpm

I concur :)

smcleod

Good on you for taking action on this kind of feedback!

arresin

Thanks, really appreciate the heads up. I put devs who do this on a personal blacklist for life.

I also think this would be better as an MCP tool/resource. Let the model operate and query it as needed.

willsmith72

It's the email/username harvesting that you mean right? Or do people also have something against anonymised product analytics?

gpm

I have something against opt-out analytics over TCP/IP or UDP/IP, period, because they aren't anonymized: they include an IP address by virtue of the protocol.

But I definitely only posted that original complaint about the email/username (I'm not the person you responded to initially).

const_cast

> anonymised product analytics?

They're not anonymous, they're just pseudonymous. It's incredibly easy to collect pieces of data A through Z that, on their own, are anonymous but, all together, are not. It's also incredibly easy to collect data that you think is generic but is actually not.

Do you query the screen size? I have bad news for you. But all of this is beside the point: when that data is exfiltrated to a third-party service, you have no idea how it's being used. You have a piece of paper, if you're lucky, telling you the privacy policy, which is usually "you have no privacy dumbass".

Even if data appears completely anonymous to humans, it can be ingested by machine learning algorithms that can spot patterns and de-anonymize the data.

I mean, we have companies whose entire business model is "how do we string together bits of data and tie it to real-world identity?": namely Google. Turns out it's remarkably easy when you have your hands in a lot of different pots. Collect a little anonymous data here, a little there, and boom: now you know that Billy Joe who lives on First Street loves to go to Walmart at 1 AM and buy Ben and Jerry's ice cream in a moment of weakness.
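The combination effect described above is easy to demonstrate. A quick sketch (all attribute values are invented; no real telemetry schema is implied) showing how a handful of individually common attributes hash into a stable, near-unique identifier:

```python
import hashlib

# Each attribute alone is shared by millions of users; the combination
# rarely is. Values below are invented for the example.
users = [
    {"screen": "1920x1080", "tz": "UTC-5", "os": "macOS 14.5", "lang": "en-US"},
    {"screen": "3456x2234", "tz": "UTC-5", "os": "macOS 14.5", "lang": "en-US"},
    {"screen": "1920x1080", "tz": "UTC+1", "os": "Ubuntu 24.04", "lang": "de-DE"},
]


def fingerprint(attrs):
    # Deterministic hash of the sorted key=value pairs: same machine,
    # same identifier, across sessions - no name, email, or IP needed.
    joined = "|".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(joined.encode()).hexdigest()[:16]


prints = [fingerprint(u) for u in users]
```

Three users with overlapping individual attributes still yield three distinct, stable identifiers, which is why "we don't collect names" is not the same as "anonymous".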

adastra22

Yes to both.

swyx

could you point me to what jurisdictions require analytics opt in esp for open source devtools? thats not actually something ive seen as a legal requirement, more a community preference.

eg ok we all know about EU website cookie banners, but i am more ignorant about devtools/clis sending back telemetry. any actual laws cited here would update me significantly

47282847

GDPR is not about cookies but about privacy in general. It’s an easy read, and yes, it applies to software and telemetry as much as it applies to websites and cookies, and it applies to anyone providing services and tools to Europeans.

"Personal data is information that relates to an identified or identifiable individual. If you cannot directly identify an individual from that information, then you need to consider whether the individual is still identifiable. You should take into account the information you are processing together with all the means reasonably likely to be used by either you or any other person to identify that individual."

gpm

I mean, you've labelled one big one already with the GDPR covering a significant fraction of the world - and unlike your average analytics "username and email address" sounds unquestionably identifying/personal information.

Where I live I think this would violate PIPEDA, the Canadian privacy law that covers all businesses that do business in any Canadian province/territory other than BC/Alberta/Quebec (which all have similar laws).

There's generally no exception in these for "open source devtools" - laws are typically still laws even if you release something for free. The Canadian version has an exception for entirely non-commercial organizations (though I don't think the GDPR does), but Bloop AI appears to be a commercial organization so it wouldn't apply. It also contains an exception for business contact information - but as I understand it that is not interpreted broadly enough to cover random developers' email addresses just because they happen to be used for a potentially personal GitHub account.

Disclaimer: Not a lawyer. You should probably consult a lawyer in the relevant jurisdiction (i.e. all of them) if it actually matters to you.

generalizations

> GDPR covering a significant fraction of the world

> privacy law that covers all business that do business in any Canadian province

A random group of people uploaded free software source code and said 'hey world, try this out'. I wish the GDPR and PIPEDA the best of luck in keeping people from doing that. (Not to actually defend the telemetry; tbh that's kinda sleazy imo.)

jjangkke

the analytics stuff is fine, but harvesting emails/github usernames appears to be illegal, especially if it's done without notifying the user?

great catch, many open source projects appear to be just an elaborate lead gen tool these days.

janoelze

fork, task claude to remove all github dependence, build.

gpm

I did this locally to try it out :) Also stubbed out the telemetry and added jj support. "Personalizing" software like this is definitely one of LLMs' superpowers.

I'm not particularly inclined to publish it because I don't want to associate myself with a project harvesting emails like this.

BeetleB

> and added jj support

Please do the same for Aider :-)

https://github.com/Aider-AI/aider/issues/4250

janoelze

yes, i was just doing/thinking the same, it was an interesting experience to sculpt a somewhat complex codebase to my needs in minutes.

hsbauauvhabzb

Use a telemetry backed tool to remove telemetry from another telemetry backed tool?

TeMPOraL

There's telemetry you consent to, and telemetry you don't. Just because I'm fine with a tool like Claude Code collecting some telemetry, doesn't mean I'm fine with a different party collecting telemetry - and the two products being used together doesn't change it. It's not naive, it's simply my right.

janoelze

it came to mind first, you're free to use whatever flavour of LLM f̶l̶o̶a̶t̶s̶ ̶y̶o̶u̶r̶ ̶b̶o̶a̶t̶ vibes your code.

swalsh

I built something similar for my own workflow. Works okay. The hard part is that as you scale, you end up with compounding false positives: the model adds some fallback mechanism that makes it work, tests pass, etc. The nice part is you can ask models to review the code from others, call out fallbacks, hard-coding, stuff like that. It does a good job at identifying buried bodies. But if you dig up a buried body, I'd manually confirm it was properly disposed of, as the models usually hid the body in the first place because they needed some input they didn't have, got confused, or ran into an issue.

oc1

We need something like a kitchen brigade in software - one who writes the vibe code tickets (Chef de Vibe), one who reviews the vibe code (Sous-Vibe), one who oversees the agents and restarts them if they get hung up (Agent de Station). We could theoretically smash a thousand tickets a day with this principle.

ggordonhall

Completely agree!

You can actually use a coding agent to create tickets from within Vibe Kanban. Add the Vibe Kanban MCP server (from MCP settings) and ask the agent to plan a task and write tickets.

atavistically

cf. the "Surgical Team" in 'The Mythical Man-Month' by Fred Brooks. That book is perennially relevant.

lharries

I used this last week and it's excellent - feels like the same productivity increase as when I first used Cursor.

Are you thinking of doing a hosted version so I can have my team collab on it?

And I found I could open lots of PRs at once but they often need to be dependent on each other - and then I want to make a change to the first one. How are you thinking of better managing that flow?

louiskw

Yeah, I think giving the option to move execution to the cloud makes a lot of sense; I already find my MacBook slowing down after 4 concurrent runs, mainly rustc.

Also, now that we're pushing many more PRs, I think we defo need better ways to stack and review work. Will look into this ASAP.

hddbbdbfnfdk

Very productive increase sirs! Whole team well promoted.

adastra22

> AI coding agents are increasingly writing the world's code and human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks.

Is this really the case?

sexeriy237

No, if we can review 10 PRs a day and AI writes one of them, we now have to review 11 PRs

barbazoo

> human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks

This feels like much too broad a statement to be true.

bwfan123

> AI coding agents are increasingly writing the world's code and human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks.

This tactic is called "assuming the sale", i.e., make a statement as if it is already true, and put the burden on the reader to negate it. The majority of us are too scared of what others think, and go along by default. It is related to the FOMO tactic in that the two can be combined into a double-whammy. For example, the statement above could have ended with: "and everyone is now using agents to increase their productivity, and if you aren't using them, you are being left behind".

Glad you stood up to challenge it.

skeeter2020

I'll add - often not adding the last part is even MORE powerful: "and everyone is now using agents to increase their productivity..."

undefined

[deleted]

undefined

[deleted]

lazarus01

> human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks

> > This feels like much too broad a statement to be true.

This is just what they wish to be true.

lbrito

I wonder how demographics (specifically age) tie into this. I'm well into my 30s and I found that statement absurd, but perhaps it is basically universally true among recent grads.

bigfishrunning

Maybe it is -- the next few years are going to get really rough for them; they'll develop no skills outside of AI.

ljm

I wouldn't say it's the majority of my time but the most utility I've got out of AI is using MCP to deal with the boring shit: update my jira tickets to in progress/in review, read feedback on a PR and address the trivial shit, check the CI pipeline and make it pass if it failed, and write commits in a consistent, descriptive way.

It's a lot more hands on when you try to write code with it, which I still try out, but it's only because I know exactly what the solution is and I'm just walking the agent towards it and improving how I write my prompts. It's slower than doing it myself in many cases.

rvz

I read that too, and these are the kind of statements that really tell you what happens when a profession embraces mediocrity and accepts something as crass as "vibe-coding", which is somehow going to change "software engineering" even when adding so-called "AI agents" - which makes it worse.

All this cargo-culting is done without realizing that more code means more security issues, technical debt, more time for humans to review the mess and *especially* more testing.

Once again, Vibe-coding is not software engineering.

skeeter2020

and I came into the industry when software was not engineering. Still think this is mostly true (you can call yourself an engineer when you insure your product)

Disposal8433

You're right and it's sad. Instead of being more serious about the output of our work, we put everything in the trash and removed all the barriers and tools that would have hardened the code. The processes to plan, write specs, and check applications went the way of the dodo too.

I'm glad I work for a regulated industry where we still have some kind of responsibility and pride for what we do. I could never work for the kind of irresponsible anarchy that AI is creating.

dhorthy

i feel so strongly that this will rapidly become true over the next 6 months. if you don't believe me check out Sean Grove's talk from mid June - https://www.youtube.com/watch?v=8rABwKRsec4

adastra22

[flagged]

uxamanda

If you use gitlab, you can use the command line "glab" tool to have agents work from the built in kanban. They can open and close tasks, start MRs off of them etc. It's not as integrated as this tool, but works well with a mix of humans and robots.

louiskw

Interesting, hadn't heard of that. Would better GitLab support be useful in Vibe Kanban?

PaulIH

Yes, being able to use GitLab as a provider would mean that we would jump on the tool, being GitLab-based. :)

deepdarkforest

This is a launch by a YC company that converts enterprise cobol code into java. Maybe it's my fault, but i tried every single coding agent with a variety of similar tools and whenever i try to parallelize, they clash while editing files simultaneously, i lose mental context of what's going on, they rewrite tests etc.

It's chaos. That's fine if you are vibe coding an unimportant nextjs/vercel demo, but i'm really sceptical of all this stance that you should be proud of how abstracted you are from code. A kanban board to just shoot off as many tasks as possible and quickly read over the PRs is crazy to me. If you want to appear a serious company that should be allowed to write enterprise code, imo this path is so risky. I see this in quite a few podcasts, tweets etc. People bragging about how abstracted they are from their own product. Again, maybe i am missing something, but all of this github copilot/just reviewing 10 coding agent PRs just introduces so much noise and slop. Is it really what you want your image to be as a code company?

unshavedyak

> Maybe it's my fault, but i tried every single coding agent with a variety of similar tools and whenever i try to parallelize, they clash while editing files simultaneously, i lose mental context of what's going on, they rewrite tests etc.

Fwiw Claude suggests using separate git worktrees for your agents. This would entirely solve the clashing, though the branches may still conflict and need normal git conflict resolution, of course.

Theoretically that would work fine, as it would be just like two people working on different branches/repos/etc.

I've not tried that though. AI generates way too much code for me to review as it is, several subtasks working concurrently would be overwhelming for me.
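The worktree setup described above can be sketched quickly. This is a minimal illustration using a throwaway repo; the paths and branch names (`agent-1`, `agent-2`) are invented, and a real orchestrator would launch an agent inside each worktree:

```python
# Sketch of the worktree approach: each agent gets its own checkout on its
# own branch, so parallel edits can't clash until an explicit merge.
import os
import subprocess
import tempfile


def run(*args, cwd):
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)


# Throwaway repo standing in for the real project.
repo = tempfile.mkdtemp()
run("git", "init", "-b", "main", cwd=repo)
run("git", "-c", "user.email=a@b", "-c", "user.name=a",
    "commit", "--allow-empty", "-m", "init", cwd=repo)

# One isolated worktree per agent, each on its own branch off main.
for agent in ("agent-1", "agent-2"):
    path = os.path.join(repo, "..", os.path.basename(repo) + "-" + agent)
    run("git", "worktree", "add", "-b", agent, path, cwd=repo)

out = subprocess.run(["git", "worktree", "list"], cwd=repo,
                     capture_output=True, text=True, check=True).stdout
```

Because each worktree is a full checkout sharing one object store, the agents never write to the same files; conflicts only surface later, at merge time, where ordinary `git merge` resolution applies.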

helsinki

As someone who has spent tens of thousands on Opus tokens and worktrees: this works in theory and somewhat in practice, but it is not as clean as people make it seem - it's just not that great. It works, but it's just, ugh, boring, super tedious, etc. At the end of it all, you're still sitting around waiting for Claude to resolve merge conflicts.

louiskw

This is a bet on a future where code is increasingly written by AI and we as human engineers need the best tools to review that work, catch issues and uphold quality.

deepdarkforest

I don't disagree, but the current sentiment i was referring to seems to be "maximize AI code generation with tools helping you to do that" rather than "prioritize code quality over AI leverage, even if it means limiting AI use somewhat."

codingdave

It is not just chaos, it is an unwanted product. Don't misunderstand - people would love this product if it works. But AI cannot do this yet. Products like this are built on an assumption that AI has matured enough to actually succeed at all tasks. But that simply isn't true. Vibe coding is still slop.

AI needs to do every single step of this type of flow to an acceptable quality level, with high standards on that definition of "acceptable", and then you could bring all the workflow together. But doing the workflow first and assuming quality will catch up later is just asking for a pile of rejections when you try to sell it.

I'm not just making this up, either... I've seen and talked to numerous people over the last couple years who all came up with similar ideas. Some even did have workable prototypes running. And they had sales from the mom/friends/family connections, but when they tried to get "real" sales, they hit walls.

iimblack

The permissions this asks for feel kinda insane to me. Why does a kanban board need to see the code or my deploy keys among other things?

jeltz

I would assume because it was vibe coded.

gpm

More generously I'd assume because

- It's an early prototype so they haven't dealt with fine grained permissions

- They really do want to do things like access private repos with it themselves

- They really do want the ability to do things like checkout code, create PRs, etc... and that involves a lot of permission.

skeeter2020

every one of your "more generous" assumptions is the opposite of what should be their process. It's the equivalent of "vacuum up as much data as possible and then decide what to do with it". Not acceptable.

TeMPOraL

Because it's not "a kanban board"? It's a coding agent orchestrator that's made in the shape of a Kanban board.

You might be right that this app asks for excessively broad privileges, but your case would be much stronger if it wasn't backed by an absurdly disingenuous argument.

Shypangz

Waiting around for AI agents to finish can kill productivity. You might want to try leads app to organize tasks and keep track of progress smoothly. It helped me stay focused while letting background work run without distractions.

_jayhack_

Very cool and interesting project. Ideas like this are a threat to traditionally-conceived project management platforms like Linear; that being said, Linear and others (Monday, ClickUp, etc.) are pushing aggressively into UX built for human/AI collaboration. I guess the question is how quickly they can execute and how many novel features are required to properly bring AI into the human project workspace

louiskw

Cheers! Smaller teams, more infrastructure, more testing, tasks requiring review in minutes not days - the features are just totally different for the new world than what legacy PM tools are optimised for, and who they have to continue to serve.

skeeter2020

If I multiply my 100x productivity gains from using AI with your 10x increase what am I supposed to do with all that free time?

ffsm8

Maybe Tony can inspire you?

( ◠ ‿ ・ ) —

https://youtube.com/shorts/YBAcvRV7VSM?si=jp2hZvFIVo-vSdu6

Vaslo

Lol - nice find
