lionkor
mstade
> - You can only run one LSP per file type, so your Rust will work fine, your C++, too, your Angular will not.
As a web developer that's an immediate deal breaker. I use Sublime today, and being able to run multiple LSP servers per file is a huge boon; it turns a very capable text editor into a total powerhouse. The way it's set up in Sublime, with configuration options that can be applied very broadly or very specifically, while having defaults that just work, is also incredible.
While I'm super pleased with Sublime and have been a happy paying customer for at least a decade (and at this rate may well be for another decade), I'm always keeping my ear to the ground for other editors, if nothing else just to stay current. Zed's been looking pretty cool, but things like this will keep me from even trying it. There's years of muscle memory and momentum built up in my editor choice; I'm not switching on a whim.
Thank you very much for sharing this nugget of gold!
jermberj
That's ... uh ... not what a rug pull is. They're telling you plainly from the jump that they're going to eventually charge for it. Point taken on your wish to wait, that makes perfect sense.
santoshalper
They're just telling you it's not going to be $0.00. It could be $5/year, $20/mo or anything else. It's the gentleman's rug-pull.
I_complete_me
Since when did gentlemen pull rugs? It seems antithetical to the behaviour of what I understand by 'gentleman'.
lionkor
I would love to edit my comment to instead say "having the rug pulled from under my feet", which is the feeling I was expressing.
thejazzman
don't those mean the same thing?
(not arguing)
recently saw an old alfred hitchcock presents where the character does the cheesiest most absurd rug pull ... and the person was boom dead. i assumed that was the origin of the term
maxbrunsfeld
Just to clarify, you can run as many LSPs in a given file type as you want.
Common features like completions, diagnostics, and auto-formatting will multiplex to all of the LSPs.
Admittedly, there are certain features that currently only use one LSP: inlay hints and document highlights are examples. For which LSP features is multi-server support important to you? It shouldn't be too hard to generalize.
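Multiplexing a request like completions across servers is conceptually just fan-out and merge. A toy Python sketch of the idea (the `FakeServer` class and the dedup-by-label merge policy are illustrative assumptions, not Zed's actual Rust implementation):

```python
# Toy sketch of LSP multiplexing: fan a completion request out to several
# language servers and merge the results, deduplicating by label.
# Illustrative only -- not Zed's actual implementation or merge policy.

class FakeServer:
    """Stand-in for a language server client."""
    def __init__(self, items):
        self.items = items

    def completions(self, document, position):
        return self.items

def multiplex_completions(servers, document, position):
    merged, seen = [], set()
    for server in servers:
        for item in server.completions(document, position):
            if item["label"] not in seen:
                seen.add(item["label"])
                merged.append(item)
    return merged

ts = FakeServer([{"label": "useState"}, {"label": "useEffect"}])
ng = FakeServer([{"label": "useState"}, {"label": "ngOnInit"}])
print([i["label"] for i in multiplex_completions([ts, ng], "app.ts", (1, 0))])
# -> ['useState', 'useEffect', 'ngOnInit']
```

The real complexity is in features like inlay hints, where results from different servers can conflict rather than simply concatenate.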
tekacs
From their overall FAQ:
> Q: Will Zed be free?
> A: Yes. Zed will be free to use as a standalone editor. We will instead charge a subscription for optional features targeting teams and collaboration. See "how will you make money?".
> Q: How will you make money?
> A: We envision Zed as a free-to-use editor, supplemented by subscription-based, optional network features, such as:
- Channels and calls
- Chat
- Channel notes
We plan to offer our collaboration features to open source teams, free of charge.
It seems to me that they're just going to charge for Zeta if they do, because it... costs them money to run. Unlike others (e.g. Cursor), they've opened it (and its fine-tuning dataset!), so you can just run it yourself if you want to bear the costs...
They did something similar with LLM use: for simplicity they offered hosted LLM access, but you could use the providers directly too.
rafaelmn
>Remote editing does not work on Windows (it's not implemented at all), so if you are on Windows, you cannot SSH into anything with the editor's remote editing feature. This means you cannot use your PC as a thin client to the actual chunky big work machine like you can with vscode.
Does this work in anything other than VSCode? I have been trying to use JetBrains stuff for this, but it has been bad for years with little improvement. Honestly, JetBrains feels like it is falling further and further behind in terms of providing a modern workflow: bad remote work, bad gen-AI integration. I'm using VS Code even where I wouldn't have considered it before because of this, and I would like to see what the alternatives have to offer, because VSCode is not perfect either.
pjmlp
As someone old enough to have used UNIX development servers for the whole company, with PC thin clients, reading about remote development as a modern workflow is kind of hilarious.
scottlamb
The remote development feature implemented in VS Code—and I believe also in beta in Zed—is a million times better than what you're used to. The UI is local, the storage and computation (including the language server) are remote. This takes away the lag when connected to a far-away server while still allowing things like platform-dependent compilation to work correctly and efficiently.
rafaelmn
Except doing it over LAN vs. the Internet is a very different thing. Editing over SSH with >100 ms ping is annoying as hell, especially if you have packet drops (like on a mobile connection). Using a thin-client editor with a remote server is a much smoother experience.
gamedever
sounds like a stubborn old person's comment. "we had fax in the 70s, why do we need anything more, now get off my lawn!"
vscode's remote services are far beyond that old remote experience, an experience I share
MobiusHorizons
It’s not exactly the same paradigm as remote editing, but neovim in tmux accessed over mosh is my preferred way of accomplishing the same task. I have also gotten a neovim gui to connect with a neovim instance over ssh, which worked pretty well until the ssh connection broke. But I prefer my editor in a terminal rather than terminals in my editor, so I switched back to my tmux based workflow.
jwiz
If you are already using tmux, what is the value of using mosh? Sincere question.
lionkor
Works on JetBrains, vscode, any terminal editor (neovim, vim, nano, etc.). Is it any good? It's fine on JetBrains, great on vscode; the rest is more or less great. Zed does not have it. You want to edit a remote file? You download it, edit it, upload it. That's much worse than a half-baked implementation.
spmurrayzzz
For clarity, this does work with the macOS version of Zed. I use it frequently to work on my GPU nodes. The one tradeoff that is a bit of a smell for me is that the preview version of the feature requires you to talk to a centralized broker on Zed's servers rather than fully p2p between your local IDE and your own server.
This is supposedly temporary though IIRC (may even be changed already in a dev branch, not sure).
vasco
With the speed that models are coming out at and the amount of VC subsidies in trials my approach is the opposite, I don't get too attached and keep trying different tools and models.
winternewt
And that's why most of these endeavors are doomed to fail. Every time they enshittify one service, there's a new one with attractive UX that VCs essentially pay you to use instead. And so the previous investment would be lost if they couldn't dump it on naive stock traders by going public before the cat is out of the bag.
kristofferR
It's like the recent Samsung phones: functionality fees are waived until 2026. No word about the price yet. [1]
Samsung should be avoided like the plague anyway; I've never seen such a malicious and hostile company! On Dec 13 they silently announced that they were gonna break Samsung APIs on Dec 30 [2]. Yeah, they gave devs the "You gotta spend your holidays fixing our mess, otherwise your app will break". Due to that, Samsung is still broken in Home Assistant and other API integrations. [3]
[1] https://youtu.be/a4NJNdHqs_I?t=418
[2] https://community.smartthings.com/t/changes-to-personal-acce...
KPGv2
> Samsung should be avoided like the plague anyway
I've avoided them for years. I had a Samsung phone a long time ago, and I'd rooted it to run one of those apps that could automate tasks (Tasker?), with a killer feature being that when I turned my phone upside down, it went into silent mode. Standard now, but back then it wasn't possible on Android, and Tasker enabled it. And also some geofencing stuff. If I got a text while going faster than 10 mph, it would respond back "driving right now, will respond later."
Anyway, Samsung released an upgrade that I'd heard would eradicate root and make it impossible going forward. Something to do with Knox, a corporate way of locking down phones for employees.
I repeatedly declined to upgrade.
Finally, one night, with my phone in another room, it force-installed the update, with Knox, on my phone, wiping out my root, making it impossible going forward, and making Tasker worthless for me.
I've never given Samsung another cent. No company that will disobey me re my own property and will remotely hack my device and wipe out my content can be trusted, and that's essentially what they did.
For similar reasons, I've never given Sony any money since the rootkit scandal. 2025 marks twenty years of no Sony. I've probably unknowingly seen a few Sony films, but that's it. No electronics, no games, etc.
Larrikin
As an Android user and developer, it's always annoying reading reviews where Samsung gets 9.5 out of ten and the competition gets a terrible review because Samsung has a slightly better camera. They give you a terrible TouchWiz UI (or whatever crap interface they switched to), change all the fonts, move around menu items and buttons, subtly change all the stock apps to be worse, push their garbage store, etc. The first Samsung Galaxy was legitimately a good phone, but all the modern reviews just feel like the author grew up with all the garbage that Samsung brings and thinks that it is actually a good experience.
ewoodrich
I absolutely prefer modern OneUI on Samsung phones to the Pixel variant or stock AOSP. The Galaxy store is only used for updating Samsung native apps and isn't "pushed" at all in my experience. I don't use their native apps for the most part, and them existing isn't a problem for me; Google apps are preinstalled and work as expected once set as the default.
JoshTriplett
This is exactly my concern with Samsung's upcoming trifold phone: I'm excited about the idea of a trifold phone, but I definitely don't want to use a phone that has anything other than the stock Android experience.
diodak
Hey, my name is Piotr and I work on language servers at Zed.
Right now you can run multiple language servers in a single project. Admittedly you cannot have multiple instances of a single language server in a single worktree (e.g. two rust-analyzers) - I am working on that right now, as this is a common pain point for users with monorepos.
I would love to hear more about the problems you are having with running language servers in your projects. Is there any chance for us to speak on our community Discord or via onboarding call (which you can book via https://dub.sh/zed-c-onboarding)?
rbetts
I've been using Zed (with Python) for the last few weeks (coming from vscode and neovim). There's a lot I like about Zed. My favorites include the speed and navigation via the symbol outline (and vim mode). I'd have a hard time going back to vscode. The LSP configuration, though, is not one of its best parts, for me. I ended up copy/pasting a few different ruff + pyright configs until one mostly worked, and puzzled through how to map the settings from the linked pyright docs into Zed's settings JSON. Some better documentation for the configuration stanzas and how they map across the different tools' settings would be really helpful.
I still, for example, can't get Zed / LSP to provide auto-fix suggestions for missing imports. (Which seems like a common stumbling block: https://github.com/zed-industries/zed/discussions/13522, https://github.com/zed-extensions/java/issues/20, https://github.com/zed-industries/zed/discussions/13281)
I'm sure that, given the breadth of LSPs (they all have their own config and use different project config files), this is hard to document clearly. But it's an area that I hope bubbles up the roadmap in due course.
Nuzzerino
I'm curious if you've given thought to improving json-schema support. Zed just packages VSCode's implementation (https://github.com/zed-industries/json-language-server), which is generally decent but hasn't been able to keep up with the spec, and I doubt it ever will at this point (example: https://github.com/microsoft/vscode/issues/165219).
The newer json-schema specs (not supported by VSCode) allow a schema to express a wider range of data requirements without resorting to costly workarounds. VSCode's level of support is decent overall, but it is still a pain point, as it creates a sort of artificial restriction on the layout of data you're able to have without unexpected development costs. This of course can lead to missed estimates and reduced morale.
I understand that very few developers are directly producing and maintaining schemas. Those schemas do have an impact on most developers though. I think this is a problem that is being sadly overlooked, and I hope you can consider changing the status quo.
Love the company name btw, sounds similar to my own Nuzz Industries (not a real company, just a tag I've slapped onto some projects occasionally as a homage to Page Industries from Deus Ex).
lionkor
Hi, thank you. I specifically meant running multiple LSPs in the same file at the same time, akin to vscode.
mihaaly
Sensitive readers please be advised: quite a bit of a rant and angry reactions are coming, in an overreacting style; please stop here if you are the sensitive type. The comments are unrelated to this particular product but aimed at the universal approach to the broad topic nowadays. No offense to any specific person is intended.
I am fed up with all these tools predicting what I want to do. Badly!! Don't guess! Wait, and I will do what I want to do. I do not appreciate it when my wife tries to figure out what I want to say in the middle of my sentence and interrupts before I finish; imagine how much I tolerate it from a f computer! I know what I am going to do, you do not! Let me do it already! This level of predicting our asses off everywhere has grown into a f nuisance by now. I cannot simply do and focus on what I want, because the many distractions and suggestions and guesses and predictions of me and my actions are in the way all the f time. Wait and see! At this overly eager level, pushed into everything, it is a nuisance! Too many times, accepting the - wrong - 'helpful suggestion' gets in the way too, hijacking a keyboard action usable elsewhere, breaking my flow, dragging in the unwanted stupid guess! Recovering my way of working from a pushy "feature" that hides or collides with my usual actions, forced on me in a "security update" or other bullshit, and turning back on the working practice that was already in place, is unwelcome and in the way too. In the long term it is not a definitive help but an around-zero-sum game. Locally, in specific situations, too many times it is a strong negative for the wrong it does! Too many problems here and there, accuracy- and implementation-wise. Forced everywhere. Don't be a smartass; you are just an algorithm, not a mind reader! Lay back and listen.
If prediction is that smart - it has been with us since the turn of the millennium, here and there - then it should do my job perfectly, and I can go walk outside and collect the money! Until then, f off!
lionkor
I find that AI autocomplete, even autocompleting full functions, is capable enough to use. I need to review all my code in detail before pushing, and I need to write unit tests, and I need to do a "run it once" test as well.
It gets it mostly right most of the time, and oftentimes it quite literally suggests what I was about to type.
This is mostly in Rust and C#, maybe other languages have more of a hurdle for AI.
powerhugs
So it can rust now? That's impressive!
Last time I tried, it had no way of producing valid Rust code beyond hello-world level, constantly producing code that failed the borrow checker.
lionkor
I rarely if ever have to worry about the borrow checker; my stumbling blocks are mostly move/copy/clone semantics.
GitHub Copilot does a good job of generating correct Rust, it just has the usual subtle-but-annihilating-if-not-caught logic bugs, like in all other languages.
linsomniac
The wife finishing your sentences is an interesting analogy... My wife and I are usually on the same page about things, so for many topics we can use short-hand or otherwise cut discussions short. It's like in the movie _Hackers_: "It's in the place I put that thing that time." We can say just enough between us that we verify we have a shared state, and if we aren't sure we can verify and adjust.
With an LLM, if what I'm starting to say gives it a direction on where I'm going, I'd like to see what it thinks, so if it's largely or entirely right I can just continue on.
For example, I just asked ChatGPT o3-mini to complete the code "def download_uri_to_file(", and it came up with the entire function, including type annotations, a very reasonable docstring, error handling, and streaming download. In fact, reviewing the code, I'm sure it's better than I would have written on a first pass (I probably wouldn't have done the error handling or the streaming, unless I knew up front that I was going to be downloading huge files).
aidenn0
> The wife finishing your sentences is an interesting analogy... My wife and I are usually on the same page about things, so for many topics we can use short-hand or otherwise cut discussions short.
My wife and I are just too different for this to happen. For the first 10 years or so we had the opposite happen a lot (multiple times a day for the first few years), where we thought we were on the same page, but had actually under-communicated. It still happens occasionally, but now we mostly overcommunicate about anything of any importance.
Our kids learned pretty quickly that if one parent was helping them with their homework, but had to leave to do something else, that asking the other parent for help was going to confuse them more, since we come at any given problem from a completely different direction.
linsomniac
Sure, it's not a given that a conversation with a spouse will be at a "we can complete each-others sentences" level every time, or even most of the time. My wife and I are pretty lucky in that regard.
And to bring it full circle, the LLMs aren't going to know where you're going in all cases. I've had REALLY poor luck getting LLMs (admittedly, a year ago) to help me with Python Textual UIs. I don't understand Textual well enough, and they don't seem to understand it well enough either; I think they also have a better understanding of archaic uses of Textual (it's a fast-moving target, I get the impression).
My wife and I literally just had a conversation of shared knowledge: "I want to get back into the habit of doing more exercise. Those things like Pikmin and that other thing." "Yeah, I know what you mean." "You know what I'm talking about?" "Yeah, but I can't remember the name." "You know the one?" "Yeah, the insurance exercise incentive one, something-go or something." "Yeah, that's the one."
tomw1808
Same here. I basically turned off all the autocomplete things everywhere in all the tools I am using; I can't stand it. And just before reading your comment, I had a Google Doc I was editing in the other tab and thought: how annoying these auto-suggestions actually are. Not helping at all; instead, a distraction (to me).
For AI coding I'm using Aider as a docker container in the terminal in the IDE, and I love it. I can write the prompt the way I feel it needs to be, and then (and only then) it makes the changes or runs whatever I requested. The IDE runs uninterrupted and without any "smart suggestions". A tool for every job. Sometimes I do a lot in aider, sometimes I don't open it at all, but it's all separated: where, what, and when things happen.
But yeah, anyway, while not feeling as strongly as you (probably) do about auto-suggestions midway through my sentence, I at least feel they are more distraction than help to me.
beefnugs
No one has even tried to do it properly: it would have to be constant, highly parallel (locally running, not pay-per-use) simulations going on in the background, with feedback from constantly changing user input and some kind of new reward detection for when it converges on something worth suggesting.
These loops and simulations would have to happen at multiple levels of abstraction all at the same time; I'm not even sure how that would work or coordinate properly, and thus: never gonna happen.
deagle50
Same. I also configured my editor not to show LSP diagnostics unless I save. Something you can't do in Zed.
Falimonda
Go hug your wife
mihaaly
You found the core of the topic, locked on to the main meaning instantly and with no error; congratulations on this incredibly sharp insight! :/
dmix
Cursor's predictions work for me the vast majority of the time (far more so than Copilot+VSCode). Might be language/framework dependent though.
gkbrk
Is someone forcing you at gunpoint to use AI autocomplete while you code? If you think it's not good, just don't use it.
heeton
Right? If you don't like the tool, turn it off.
I find autocomplete _exceptionally_ useful. It's right in most of the simple tasks I'm trying to do and speeds me up a lot.
botanical76
Well, I notice there is a lot of pressure in organizations for individual developers to start making use of these tools. I was already using AI extensively before my company picked up on it, so it doesn't really affect me negatively, but I notice some of my coworkers starting to ask questions like "Do I have to use it?". The status quo seems to imply that you {refuse to accept change,aren't willing to grow,aren't interested in increasing efficiency in workflow} if you don't use AI tools / autocomplete.
So while it is unlikely anyone is _forcing_ you to use AI-enabled efficiency boosters, there may be a strong managerial pressure felt to do so, and it may even be offered as an action item in yearly reviews, and therefore strongly linked to compensation / incentives.
That is all to say, I understand if people in this group are frustrated with the AI hype train at the moment, even if they can appreciate that these tools do indeed improve efficiency in some places and in some people.
pritambarhate
If an employee demonstrates the same level of productivity without using AI, most managers would likely be fine with that approach. However, if a manager observes that several team members are more productive with AI and are achieving business goals more quickly, they will naturally expect everyone to adopt it. Those who refuse to use AI and cannot match the efficiency of their peers may eventually be replaced. While this outcome may be emotionally challenging, economic realities primarily drive these decisions.
mihaaly
[flagged]
qaq
Yep, it should be configurable. At the very least, let me type the function name before it starts predicting.
yellow_lead
Seems like you can't run it locally. I don't like my code being sent to a third party, especially when my employer may not agree with it.
I also edit secret/env files in my IDE, so for instance, a private key or API key could get sent, right?
I hope there will be a local option later.
_flux
They use a backend configurable via the environment variable ZED_PREDICT_EDITS_URL: https://github.com/zed-industries/zed/blob/2f734cbd5e2452647... , but I don't know whether the /predict_edits/v2 endpoint is something other projects provide or not.
At least the model is available and interacting with it seems simple, so it's probably quite realistic to have an open/locally runnable version of it. The model isn't very big.
levzzz
yeah, i'd like to be able to run it locally. it should fit well onto my 12gb gpu
mbitsnbites
The model is based on Qwen2.5-Coder-7b it seems. I currently run some quantized variant of Qwen2.5-Coder-7b locally with llama.cpp and it fits nicely in the 8GB VRAM of my Radeon 7600 (with excellent performance BTW), so it looks like it should be perfectly possible.
I would also only use Zeta locally.
mikaylamaki
> a private key or API key could get sent, right?
You can disable this feature on a per-file basis, here’s the relevant setting: https://github.com/zed-industries/zed/blob/39c9b1f170cd640cd...
yencabulator
Sending files to a remote server is never something that should need to be disabled; this must be opt-in, or it's time for a fork.
85392_school
It is opt-in. You have to manually sign in to Zed and enable the feature.
mikaylamaki
Agreed. The predict edit feature needs to be actively enabled before it'll do anything. And once it's enabled, it won't send up your private keys or environment variables, as long as their filenames match a glob in this list or in a list you configure.
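The exclusion mechanism is glob matching against filenames. Conceptually it works like the sketch below — the glob list is illustrative, not Zed's shipped defaults, and this is not Zed's implementation (note that Python's `fnmatch` lets `*` cross `/`, so it only approximates real `**` glob semantics):

```python
from fnmatch import fnmatch

# Illustrative exclusion globs -- Zed ships its own defaults and lets you
# extend them via "edit_predictions": {"disabled_globs": [...]}.
DISABLED_GLOBS = ["**/.env*", "**/*.pem", "**/*.key", "**/secrets.yml"]

def predictions_allowed(path: str) -> bool:
    """Return False for files whose contents should never be sent upstream.

    NB: fnmatch's `*` also matches `/`, so this only approximates
    real `**` glob semantics.
    """
    return not any(fnmatch(path, pattern) for pattern in DISABLED_GLOBS)

print(predictions_allowed("src/main.rs"))       # True
print(predictions_allowed("deploy/.env.prod"))  # False
```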
thomascountz
If you were looking for the configuration like I was[1][2]:
{
  "show_edit_predictions": <true|false>,
  "edit_predictions": {
    "disabled_globs": [<globs>],
    "mode": <"eager_preview"|"auto">
  },
  "features": {
    "edit_prediction_provider": <"copilot"|"supermaven"|"zed"|"none">
  }
}
[1]: https://zed.dev/docs/completions
tripplyons
Thanks for sharing, it would have taken me some time to find this. It should really be included in the article.
mikebelanger
I've been using Zed for a few months now. One thing I really like about Zed is its relatively discreet advertising of new features, like this edit prediction one. It's just a banner shown in the upper left, and it doesn't block me from doing other stuff, or force me to click "Got it" before using the application more.
This definitely counters the trend of putting speech balloons/modals/other nonsense that force a user to confirm a new feature. Good job, Zed team!
barrell
I read this wrong initially — I thought you said one thing you __dislike__ about Zed.
I read the whole thing thinking, __oh my god they do exist__
boxed
I tried CoPilot for a while, and my biggest gripe was tab for accepting the suggestion. I very often got a ton of AI garbage when I was just trying to indent some code.
Tab just doesn't seem like the proper interface for something like this.
ljm
I actually wish more editors had emacs style indenting where hitting tab anywhere on the line would re-indent it or otherwise cycle through indent levels if it was unclear, especially because you’re unlikely to get copilot suggestions in the middle of a word. Plus, it doesn’t break if there’s a syntax error elsewhere in the file.
fhd2
It's one of those weird Emacs things that you get _so_ used to that everything else seems to waste your time.
Using code formatters and formatting on save or with a shortcut is OK, but not really the same to me.
That's probably why I'm stuck with Emacs:
1. No need to use the mouse.
2. Extremely efficient keyboard usage (maybe not the most efficient, but compared with common IDEs, certainly).
Makes it feel like I'm actually using a brain computer interface. The somewhat regular yak shaving is a bit of a bummer though. I like that I can modify everything to be exactly how I like it, but I wouldn't mind sensible defaults. Haven't found a distribution yet that works well without tinkering.
ljm
Exactly, and sometimes I don't want to save a file just for indentation to happen as a side effect, especially if I can do `C-x h TAB` to correctly re-indent the entire thing.
That's particularly more helpful when some formatters will actually rewrite your code to either break lines up or squish things back into a one-liner.
boxed
Oh, didn't know about that. That makes a lot of sense. Reminds me of how hitting cmd+c on a line in PyCharm copies the entire line if there is no selection. Because what else would make sense?
yencabulator
I thought that was a great feature. Now I'm writing mostly languages with well-defined formatting rules and simply never need tab. This is even better.
as-cii
Hey! Zed founder here.
We totally agree with this and that's why Zed will switch the keybinding for accepting an edit prediction to `alt-tab` when the cursor is in the leading whitespace of a line. This way you can keep using `tab` for indenting in that situation.
Also, when there's both an edit prediction and an LSP completion, Zed switches the keybinding to `alt-tab` to prevent the conflict with accepting an LSP completion.
Curious to hear what you think!
danielsamuels
For reasons that should be obvious, that's not going to work on Windows.
as-cii
Sorry, I assumed macOS, but you're right! For Linux (and Windows, once we ship support for it) the keybinding is alt-l to avoid conflicting with tab switching.
awfulneutral
Ohhh, is that why I keep pressing tab and it doesn't accept the prediction lately? I thought it was a bug. It feels weird for tab to double-indent when it could be accepting a prediction - I wonder if alt-tab to do a manual indent rather than accept the current prediction might be preferable?
Edit - On the other hand, a related issue is that if the prediction itself starts with whitespace, in that case it would be good if tab just indents like normal; otherwise you can't indent without accepting the prediction.
VWWHFSfQ
Is there a way to change this keybinding (tab for accept) right now? Because otherwise I have to stop using this program. It is absolutely obnoxious.
janaagaard
After using Prettier to format my code and turning on format-on-save, I pretty much don't use the tab key anymore. This doesn't invalidate your point; I am merely guessing as to why the tab key has seemingly been reassigned.
daliusd
Yes, copilot's tab in vim is what made me think that AI is useless. However, the next iteration of AI coding tools made me rethink this (I am using https://github.com/olimorris/codecompanion.nvim with nvim now).
windward
AI coding tool implementers seem to be fans of novel editor fonts.
madmulita
Don't give them ideas! I can already see the useless AI key next to Fn in my next keyboard.
card_zero
That already happened, just over a year ago, we have Copilot keys now.
I guess it invokes the AI rather than controlling it; maybe there'll be another key soon.
yencabulator
See, you're thinking of Microsoft Copilot, but code completion is provided by Github Copilot, so it'll need its own key. Which will also be labeled Copilot.
MaikuMori
Doesn't exactly fix the issue, but you can cancel the suggestion with ESC and then press Tab.
Changing the shortcut should be possible, but I haven't tried.
boxed
It's a timing issue too. Your hand can be travelling to the keyboard, and between that and the keystroke registering in the OS, the AI suggestion inserts itself in between.
marcosdumay
It's way better than the other Microsoft favorites of space and enter...
It's as if the people developing autocomplete don't really code.
moritzruth
Ctrl+Return works quite well for me in IntelliJ.
dgacmu
As a slight tangent, this prompted me to wonder about one of the things I _haven't_ enjoyed in my last two weeks of experimenting with zed: It tries to autocomplete comments for me. Hands off - that's where I think!
Fortunately, zed somewhat recently added options to disable these:
"edit_predictions_disabled_in": ["comment"],
"inline_completions_disabled_in": ["comment"]
My life with zed just got a little better. If I switch back to vscode I'll have to figure out the same setting there. :-)
fredoliveira
FYI, it looks like inline_completions_disabled_in is no longer a thing :-)
dgacmu
Ahh, I see - it looks like in the newer version it is being replaced by just edit_predictions_disabled_in.
Thanks!
dakiol
Am I the only one who prefers stability instead of a constant rush of features in their text editors/IDEs? If it’s AI related I like them even less. I know I can stick forever with Vim, but damn, I tried Zed and it felt good.
Arch485
Zed is amazing, and I definitely recommend it. That said, I will not be using their AI features, and if the editor turns into a slow, bloated monster because of them (like Visual Studio and anything made by JetBrains) I will have to ditch it.
awfulneutral
This just seems to be the way for code editors. We just have to switch every few years to the next one.
jswny
I agree. I actually use the AI features in Zed a lot, but there are things I really wish they would prioritize.
For example this issue that’s been open for about a year: https://github.com/zed-industries/zed/issues/10122
Editing large files is an incredibly common use case for an editor.
anon7000
The problem is that practically every company which can spend money is asking developers to explore AI development tools. It’s becoming a bare-minimum feature for many, which is why Cursor has exploded in popularity.
The other stuff is awesome and important (debugger when!?), but AI is becoming table stakes for even participating in the current tech economy. It’s all execs are talking about. So it’s not surprising they’ve prioritized this work
elashri
It seems that someone has already published different quantized versions of the model [1]. These can be used to define a Modelfile for use with Ollama locally. But I am not sure that Zed allows changing the endpoint of this feature yet (or ever). Of course it is open source and you can change it, but then you will need to build it yourself.
as-cii
Hey elashri, Zed co-founder here.
There currently is no official way of configuring Zed to use Ollama for edit prediction, but I would love to accept a pull request that implements it!
It should be relatively straightforward and we're happy to accept contributions here: this has been something I wanted to experiment with for a while but didn't get around to for the launch.
_flux
I found https://github.com/zed-industries/zed/blob/2f734cbd5e2452647... which leads me to believe the environment variable ZED_PREDICT_EDITS_URL does control the endpoint.
elashri
But is there a setting you can modify after the binary is built? Something like in the application settings? Or do you need to rebuild it?
master-lincoln
It's an environment variable you can set before starting the program. No need to recompile.
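Given that variable, pointing the feature at a local server is plausibly just a matter of launching Zed with it set. A sketch, assuming Ollama's default port; the exact path and request format Zed expects are assumptions, so check the linked source:

```shell
# Hypothetical: route edit-prediction requests to a local Ollama instance.
# 11434 is Ollama's default port; the path Zed expects is an assumption.
export ZED_PREDICT_EDITS_URL="http://localhost:11434/v1/completions"
zed .
```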
markus_zhang
I think the modern Intellisense has the right amount of prediction - offloads enough brain activity without completely relying on something else.
AI prediction feels way too much and way too eager to give me something. I don't know about you guys, but programming is an exercise for me, not just to make it work and call it a day.
However, AI would be useful if it could offer structural and pattern recommendations for a program. One big problem I now face, and I believe all hobbyists face too, is that as a program grows larger it becomes increasingly difficult to keep it well structured and easy to extend -- on the other hand, premature architecting is also an issue. Reading other people's source is not particularly useful, because 1) you don't know whether it is suitable or even well written, and 2) it is usually too tough to read other people's source code.
bennine
> but programming is an exercise for me, not just to make it work and call it a day.
The problem is that mid, upper management and execs don't much care for how we feel about it.
They are literally measuring who is using AI and how much and will eventually make it into an excuse for poor performance.
markus_zhang
Yeah I don't really mind using AI coding in work because it's boring as hell. And getting things done quicker is almost a virtue in the business world.
I should have clarified that my original comment is about side projects or serious software engineering.
coder543
Two immediate issues that I noticed:
1. If I make a change, then undo, so that change was never made, it still seems to be in the edit history passed to the model, so the model is interested in predicting that change again. This felt too aggressive... maybe the very last edit should be forgotten if it is immediately undone. Maybe only edits that exist against the git diff should be kept... but perhaps that is too limiting.
2. It doesn't seem like the model is getting enough context. The editor would ideally be supplying the model with type hints for the variables in the current context, and based on those type hints being put into the context, it would also pull in some type definitions. (I was testing this on a Go project.) As it is, the model was clearly doing the best it could with the information available, but it needed to be given more information. Related, I wonder if the prediction could be performed in a loop. When the model suggests some code, the editor could "apply" that change so that the language server can see it, and if the language server finds an error in the prediction, the model could be given the error and asked to make another prediction.
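The predict-apply-check loop proposed in point 2 can be sketched concretely. Everything here is hypothetical: `predict` stands in for the model and `lint` for the language server's diagnostics.

```python
def refine_prediction(buffer, predict, lint, max_rounds=3):
    """Ask the model for an edit, apply it speculatively, and feed any
    language-server diagnostics back in as context for another attempt."""
    feedback = None
    for _ in range(max_rounds):
        edit = predict(buffer, feedback)   # model call (stubbed by caller)
        candidate = buffer + edit          # speculatively "apply" the edit
        errors = lint(candidate)           # ask the language server to check it
        if not errors:
            return candidate               # clean prediction: accept it
        feedback = errors                  # retry, with the errors as context
    return buffer                          # give up and keep the original text
```

With a toy `lint` that demands a trailing semicolon, a bad first prediction gets corrected on the second round once the diagnostics are fed back in.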
keyle
Good, charge for Zed and secure its future.
I find myself wanting to use Zed more and more every day, and shifting away from other editors whenever possible. Some LSP implementations are lacking... but it's getting damn close!
I love the new release every week. Zed is my recent love, along with Ghostty, which is also stellar.
Hanging by a thread for some sort of lldb/gdb integration with breakpoints and inspection! Hopefully some day, without becoming a bag of turd.
> Edit prediction won't be free forever, but right now we're just excited to share and learn.
I love Zed, and I'm happy to pay for AI stuff, but I won't be using this until they are done with their rug pull. Once I know how much it costs, I can decide if I want to try integrating it into my workflow. Only THEN will I want to try it, and would be interested in a limited free trial, even just 24 hours.
Considering I've seen products like this range from free to hundreds of dollars per month, I'd rather not find out how good it is and then find out I can't afford it.
Other than that for anyone wanting to try Zed:
- You can only run one LSP per file type, so your Rust will work fine, your C++, too, your Angular will not.
- Remote editing does not work on Windows (it's not implemented at all), so if you are on Windows, you cannot ssh into anything with the editor's remote editing feature. This means you cannot use your PC as a thin client to the actual chunky big work machine like you can with vscode. I've seen a PR that adds Windows ssh support, but it looked very stale.