LASR
aduffy
This has existed for a long time, it's called "RPA" or Robotic Process Automation. The biggest incumbent in this space is UiPath, but there are a host of startups and large companies alike that are tackling it.
Most of the things that RPA is used for can be easily scripted, e.g. download a form from one website, open up Adobe. There are a lot of startups that are trying to build agentic versions of RPA, I'm glad to see Anthropic is investing in it now too.
CSMastermind
RPA has been a huge pain to work with.
It's almost always a framework around existing tools like Selenium that you constantly have to fight against to get good results from. I was always left with the feeling that I could build something better myself just handrolling the scripts rather than using their frameworks.
Getting Claude integrated into the space is going to be a game changer.
visarga
Most RPA work is in dealing with errors and exceptions, not the "happy path". I don't see how Claude's screen agent will work out there - what do you do when an error pops up and you need to implement specific business logic for how to respond? What about consistency over many executions, and across enterprise accounts? You want a centralized way to control agent behavior. Scripting-based RPA is also much faster and cheaper to run, and more consistent.
Maybe Anthropic should focus on building flexible RPA primitives we could use to build RPA workflows with - for example, extracting values from components that need scrolling, selecting values from long drop-down menus, or handling error messages under form fields.
xxpor
I can see it now, Claude generating expect scripts. 1994 and 2024 will be fully joined.
creativenolo
I was going to comment about this. Worked at a place that had a “Robotics Department”, wow I thought. Only to find out it was automating arcane software.
UI is now much more accessible as an API. I hope we don’t start seeing captcha-like behaviour in desktop or web software.
bonoboTP
Wow, that's a grim potential future. I can already see software producers saying that e.g. the default license only allows operation of our CAD designer software by a human operator. If you want to make your bot use it in an automated way, you must buy the bot license, which costs 10x more.
monkeydust
Exactly. I have been wondering for a while how GenAI might upend RPA providers - guess this might be the answer.
arach
I've been wondering the same and started exploring building a startup around this idea. My analysis led me to the conclusion that if AI gets even just 2 orders of magnitude better over the next two years, this will be "easy" and considered table stakes. Like connecting to the internet, syncing with cloud or using printer drivers
I don't think there will be a very big place for standalone next-gen RPA pure plays. It makes sense that companies trying to deliver value would implement capabilities like this themselves. Over time, I expect some conventions/specs will emerge. Either Apple/Google or Anthropic/OpenAI are likely to come up with an implementation that everyone aligns on.
In other words, I agree
iwontberude
UiPath hasn't figured out how to make a profitable business since 2005, and we are nearing the end of this hype cycle. I am not so sure this will lead anywhere. I am a former investor in UiPath.
zukzuk
Attempts at commercialization in technology seem to often happen twice. First we get the much-hyped failure, and only later we get the actual thing that was promised.
So many examples come to mind… RealVideo -> YouTube, Myspace -> Facebook, Laserdisc -> DVD, MP3 players -> iPod…
UiPath may end up being the burned pancake, but the underlying problem they’re trying to address is immensely lucrative and possibly solvable (hey if we got the Turing test solved so quickly, I’m willing to believe anything is possible).
voidmain0001
It didn’t help that UiPath forced a subscription model and “cloud orchestrator” on all users, many of whom needed neither. They got greedy. We ditched it.
eitally
My impression is that actually solving this classic RPA problem with AI is exactly the raison d'etre of AI21Labs with their task specific models[1]. They don't have the biggest or best general purpose LLM, but they have an excellent model that's been pre-trained on specific types of business data and also made available for developers using simple APIs & "RPA-style" interfaces.
tkellogg
Honestly, this is going to be huge for healthcare. There's an incredible amount of waste due to incumbent tech making interoperability difficult.
voidmain0001
Hopefully.
I’ve implemented quite a few RPA apps and the struggle is the request/response turnaround time for realtime transactions. For batch data extraction or input, RPA is great since there’s no expectation about process duration. However, when a client requests data in realtime that can only be retrieved from an app using RPA, the response time is abysmal. Just picture it: start the app; log into the app if it requires authentication (hope that the authentication's MFA is email-based rather than token-based, and then access the mailbox using an in-place configuration with MS Graph/Google Workspace/etc); navigate to the app’s view that has the data or, worse, bring up a search interface since the exact data isn’t known and try to find the requested data. So brittle...
claytongulick
It is.
CTO of healthcare org here.
I just put a hold on a new RPA project to keep an eye on this and see how it develops.
According to their docs, Anthropic will sign a BAA.
miki123211
Healthcare has the extra complication of HIPAA / equivalent local laws, and institutions being extremely unwilling to process patient data on devices they don't directly control.
I don't think this is going to work in that industry until local models get good enough to do it, and small enough to be affordable to hospitals.
bonoboTP
Based on Tog's paradox (https://news.ycombinator.com/item?id=41913437) the moment this becomes easy, it will become hard again with extra regulation and oversight and documentation etc.
Similarly I expect that once processing/searching laws/legal records becomes easy through LLMs, we'll compensate by having orders of magnitude more laws, perhaps themselves generated in part by LLMs.
SoftTalker
> There's an incredible amount of waste due to incumbent tech making interoperability difficult.
So the solution to that is to add another layer of complex AI tech on top of it?
girvo
We’ll see. Having worked in this space in the past, the technical challenges can be overcome today with no new technology: it's a business, sales, and regulation challenge more than a tech one.
dzhiurgis
Or you'll start getting a captcha while trying to pump insulin
TeMPOraL
> Being in the SaaS space makes you believe that everyone ought to have client-server backend APIs etc.
FWIW, looking at it from end-user perspective, it ain't much different than the Windows apps. APIs are not interoperability - they tend to be tightly-controlled channels, access gated by the vendor and provided through contracts.
In a way, it's easier to make an API to a legacy native desktop app than it is to a typical SaaS[0] - the native app gets updated infrequently, and isn't running in an obstinate sandbox. The older the app, the better - it's more likely to rely on OS APIs and practices, designed with collaboration and accessibility in mind. E.g. in Windows land, in many cases you don't need OCR and mouse emulation - you just need to enumerate the window handles, walk the tree structure looking for text or IDs you care about, and send targeted messages to those components.
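The handle-walking approach can be sketched in a few lines. On Windows the child lists and text fields would come from EnumChildWindows and WM_GETTEXT calls (e.g. via ctypes); here the window tree is modeled as plain dicts, and all names are purely illustrative:

```python
def find_controls(window, predicate):
    """Depth-first walk over a window tree, collecting handles whose
    text matches `predicate` -- the same shape as an EnumChildWindows walk."""
    matches = []
    stack = [window]
    while stack:
        node = stack.pop()
        if predicate(node["text"]):
            matches.append(node["handle"])
        stack.extend(node.get("children", []))
    return matches

# A fake handle hierarchy standing in for a legacy app's window tree:
app = {
    "handle": 100, "text": "Invoice Editor",
    "children": [
        {"handle": 101, "text": "Save"},
        {"handle": 102, "text": "Total: $42.00"},
    ],
}

print(find_controls(app, lambda t: t.startswith("Total")))  # [102]
```

Once you have the handle, you'd send it a targeted message (e.g. a click) instead of emulating the mouse at pixel coordinates.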
Unfortunately, desktop apps are headed the same direction web apps are (increasingly often, they are web apps in disguise), so I agree that AI-level RPA is a huge deal.
--
[0] - This is changing a bit in that frameworks seem to be getting complex enough that SaaS vendors often have no clue as to what kind of access they're leaving open to people who know how to press F12 in their browsers and how to call cURL. I'm not talking bespoke APIs backend team wrote, but standard ones built into middleware, that fell beyond dev team's "abstraction horizon". GraphQL is a notable example.
pants2
Basically, if it means companies can introduce automation without changing anything about the tooling/workflows/programs they already use, it's going to be MASSIVE. Just an install and a prompt and you've already automated a lengthy manual process - awesome.
bambax
Companies are going to install an AI inside their own proprietary systems full of proprietary and confidential data and PII about their customers and prospects and whatnot, and let it run around and click on random buttons and submit random forms?
Really??!? What could possibly go wrong.
I'm currently trying to do a large OCR project using the Google Vision API, and then Gemini 1.5 Pro 002 to parse and reconstruct the results (taking advantage, one hopes, of its big context window). As I'm not familiar with the Google Vision API, I asked Gemini to guide me in setting it up.
Gemini is the latest Google model; Vision, as the name implies, is also from Google. Yet Gemini makes several egregious mistakes about Vision, gets names of fields or options wrong, etc.
Gemini 1.5 "Pro" also suggests that concatenating two json strings produces a valid json string; when told that's unlikely, it's very sorry and makes lots of apologies, but still it made the mistake in the first place.
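The JSON point is easy to verify: concatenating two JSON documents produces two top-level values, which no parser accepts; you have to parse both and merge the structures.

```python
import json

a = json.dumps({"pages": [1]})
b = json.dumps({"pages": [2]})

# Naive concatenation is not valid JSON -- the parser hits "extra data":
try:
    json.loads(a + b)
    concat_ok = True
except json.JSONDecodeError:
    concat_ok = False
print(concat_ok)  # False

# Correct: parse both, merge the parsed structures, re-serialize.
merged = json.loads(a)
merged["pages"] += json.loads(b)["pages"]
print(json.dumps(merged))  # {"pages": [1, 2]}
```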
LLMs can be useful when used with caution; letting one loose in an enterprise environment doesn't feel safe, or sane.
ddalex
LLMs can't reason - or can't reason logically, to be precise; what they are really good at is recalling.
So if you want accurate results when writing code, you need to put all the docs into the input and THEN ask your question. Download all the docs on Vision, put them in the Gemini prompt, and ask your question or for code on how to use Vision, and you'll get much closer to the truth.
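As a minimal sketch of that doc-stuffing approach (the function, tags, and doc contents below are invented for illustration):

```python
def build_grounded_prompt(docs, question):
    """Prepend reference documentation so the model can recall from
    context rather than confabulate API details."""
    context = "\n\n---\n\n".join(docs)
    return (
        "Answer using ONLY the documentation below.\n\n"
        f"<docs>\n{context}\n</docs>\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    ["Vision API: use text_detection for OCR on images."],
    "How do I OCR an image with the Vision API?",
)
print("text_detection" in prompt)  # True
```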
miyuru
Have you tried any others? From what I have tried, Gemini makes the most mistakes of them all.
LASR
That's exactly it.
I've been peddling my vision of "AI automation" for the last several months to acquaintances of mine in various professional fields. In some cases, even building up prototypes and real-user testing. Invariably, none have really stuck.
This is not a technical problem that requires a technical solution. The problem is that it requires human behavior change.
In the context of AI automation, the promise is huge gains, but when you try to convince users/buyers, there is nothing wrong with their current solutions - i.e., there is no problem to solve. So essentially: "why are you bothering me with this AI nonsense?"
Honestly, human behavior change might be the only real blocker to a world where AI automates most of the boring busy work currently done by people.
This approach essentially sidesteps the need to effect a behavior change, at least in the short term, while AI proves and solidifies its value in the real world.
sdwr
There's a huge huge gap between "coaxing what you want out of it" and "trusting it to perform flawlessly". Everybody on the planet would use #2, but #1 is just for enthusiasts.
AI is squarely #1. You can't trust it with your credit card to order groceries, or to budget and plan and book your vacation. People aren't picking up on AI because it isn't good enough yet to trust - you still have the burden of responsibility for the task.
Aeolun
There’s nothing to gain for anyone there. Workers will lose their jobs, and managers will lose their reports.
ldjkfkdsjnv
Yeah this will be a true paradigm shift
aledalgrande
Talking about ancient Windows software... Windows used to have an API for automation in the 2000s (I don't know if it still does). I wrote this MS Access script that ran and moved the cursor at exactly the pixel coordinates where buttons and fields were positioned in a GUI that we wanted to extract data from, in one of my first jobs. My boss used to do this manually. After a week he had millions of records ready to query in Access. You can imagine how excited he was. Was a fun little project and pretty hilarious to see the cursor moving fast AF around the screen like it was possessed. PS: you could screw up the script run pretty easily by bumping into the mouse of that pc.
voidmain0001
Still present. VB and VBScript would do this by sending mouse moves to window handles, which were discovered using Spy++. You can do this with C# or AutoIt these days.
HPsquared
PowerShell has some amazing capabilities.
DrScientist
Really good software has automation built in, in the form of macros/scripting.
One of the reasons my bash terminal is more effective than point-and-click is the ease of automating routine tasks: from history and aliases to scripting.
Software interop is trickier: it doesn't just depend on being able to send messages from one piece of software to another; you need an agreed format for those messages (whether they're expressed in the payload of an API or a UI-specific stream of points and clicks).
brabel
I tried once to integrate with software used by pharmacists in Australia (which costs a fortune and looks straight out of the 90's). Even though they have a SQL database with all information about everything, the DB is encrypted and they provide no way to access it. I tried contacting the company to learn how I could integrate with it but they offered no help. It's not in their interest to help upcoming competitors and they claim they cannot provide the encryption key as that's for protecting the customer's privacy, forgetting that the pharmacists already have access to all data through their software. But without a way to automatically extract information from the DB, there's no way to migrate to new software as no pharmacist would want to use new software if it cannot "import" the data they need.
It's clear that the reason there's no competition in the market is that the companies (I think it's literally one company) selling this software want it that way, and legislation has yet to catch up with the way they ensure their monopoly.
speleding
I'm a bit skeptical about this working well enough to handle exceptions as soon as something out of the ordinary occurs. But it seems this could work great for automated testing.
Has anyone tried asking "use computer" to do "Please write a selenium/capybara/whatever test for filling out this form and sending it?"
That would take away some serious drudge work. And it's not a big problem if it fails, contrary to when it makes a mistake in filling out a form in an actual business process.
marvinkennis
Momentic (W24) is doing this! No affiliation, but they've made some solid progress https://momentic.ai/
gavin_gee
LLMs are enabling a reimagination of UI. Where's the biggest opportunity in UI that hasn't kept up to date? Legacy and regulated software in the long tail. That's the disruption opportunity for LLMs.
jumping_frog
Imagine a banking website. It has a lot of predefined flows of what can be achieved. These flows have steps arranged in a dependency graph. From the server side, an LLM can ask users for inputs to satisfy the dependencies of the task the user wants to accomplish. We will have intuitive UI interfaces in all languages of the world.
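The dependency idea can be sketched: each flow step declares the inputs it needs, and the server walks the graph to see what it still has to ask the user for. Flow and field names below are invented for illustration:

```python
flow = {
    "transfer": {"needs": ["amount", "to_account", "auth"]},
    "auth": {"needs": ["otp"]},
}

def missing_inputs(step, provided, flow):
    """Collect the raw inputs a flow still lacks, recursing into sub-steps."""
    missing = []
    for need in flow.get(step, {}).get("needs", []):
        if need in flow:                      # a sub-step: recurse into it
            missing += missing_inputs(need, provided, flow)
        elif need not in provided:            # a raw input the user owes us
            missing.append(need)
    return missing

print(missing_inputs("transfer", {"amount": 100}, flow))  # ['to_account', 'otp']
```

An LLM front-end would then phrase one question per missing input, in whatever language the user speaks.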
sensanaty
But if it's a predefined list of flows, you can just throw Selenium/Puppeteer/Playwright/whatever other automation tool at it, rather than relying on an unstable AI that will do different things every time.
This is just a solution in search of a problem. If the content changes constantly, AIs aren't reliable enough - they'll click on the "close my account" button for no reason half the time - and a legacy website with no changes is much easier to automate with a tool like Selenium than with an AI that will still be making random choices half the time.
marsh_mellow
Anthropic blog post outlining the research process: https://www.anthropic.com/news/developing-computer-use
Computer use API documentation: https://docs.anthropic.com/en/docs/build-with-claude/compute...
Computer Use Demo: https://github.com/anthropics/anthropic-quickstarts/tree/mai...
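For reference, a request body for the computer-use beta looks roughly like this - the tool type and model strings below are copied from the linked docs as of this writing, so verify them there before relying on this sketch:

```python
import json

body = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [{
        "type": "computer_20241022",    # beta tool identifier from the docs
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
        "display_number": 1,
    }],
    "messages": [
        {"role": "user", "content": "Open Firefox and go to example.com"}
    ],
}
payload = json.dumps(body)
print("computer_20241022" in payload)  # True
```

Per the docs, the POST to /v1/messages also needs the `anthropic-beta: computer-use-2024-10-22` header; the model replies with tool-use blocks (screenshots, clicks, keystrokes) that your harness executes and reports back.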
distalx
In their "Developing a computer use model" post they mention:
> On one evaluation created to test developers’ attempts to have models use computers, OSWorld, Claude currently gets 14.9%. That’s nowhere near human-level skill (which is generally 70-75%), but it’s far higher than the 7.7% obtained by the next-best AI model in the same category.
Which model does "next-best AI model in the same category" refer to here?
karpatic
This needs to be brought up. I was looking for the demo and ended up on the contact form.
frankdenbow
Thanks for these. Wonder how many people will use this at work to pretend that they are doing work while they listen to a podcast.
nwnwhwje
This is cover for the people whose screens are recorded. Run this on the monitored laptop to make yourself look busy, then do the actual work on laptop 2 - some of which might actually require thinking, so no mouse movements.
csomar
I like its lazy approach
> I apologize, but it seems we're having some issues installing Chromium. As an alternative, since I can see Firefox is already installed on the system, would you like to use Firefox instead? Or would you prefer me to try a different approach to installing Chromium?
It seems unable to figure out how to install Chromium, sees the Firefox icon sitting there, and just wants to use that instead. You know, like a real lazy human being.
It is insistent when it needs to be, though:
> I clicked on Firefox and it should start opening. Let me take another screenshot to see if the window has appeared:
> Firefox seems to be taking a moment to start up. Let me wait a few seconds and take another screenshot:
> Let me try clicking Firefox one more time to ensure it starts:
> Let me try one more time to take a screenshot to see if Firefox has opened:
> Let me try starting Firefox using the bash tool as an alternative method:
> Firefox has successfully opened! As per the instructions, I should ignore the setup wizard and instead click directly on the address bar. Would you like me to do anything specific with Firefox now that it's open?
I didn't instruct it to ignore the setup wizard, so my guess is that Anthropic has configured it to ignore stuff happening on the screen so it doesn't run loose.
And here he goes through my website, through my Hacker News account, and then finds this very comment:
> Looking at his first/most recent comment, it's about a discussion of Claude and computer use. Here's what he wrote:
"I like its lazy approach"
This appears to be a humorous response in a thread about "Computer use, a new Claude 3.5 Sonnet, and Claude..." where he's commenting on an AI's behavior in a situation. The comment is very recent (shown as "8 minutes ago" in the screenshot) and is referring to a situation where an AI seems to have taken a simpler or more straightforward approach to solving a problem.
lobochrome
SYSTEM_PROMPT = f"""<SYSTEM_CAPABILITY>
* You are utilising an Ubuntu virtual machine using {platform.machine()} architecture with internet access.
* You can feel free to install Ubuntu applications with your bash tool. Use curl instead of wget.
* To open firefox, please just click on the firefox icon. Note, firefox-esr is what is installed on your system.
* Using bash tool you can start GUI applications, but you need to set export DISPLAY=:1 and use a subshell. For example "(DISPLAY=:1 xterm &)". GUI apps run with bash tool will appear within your desktop environment, but they may take some time to appear. Take a screenshot to confirm it did.
* When using your bash tool with commands that are expected to output very large quantities of text, redirect into a tmp file and use str_replace_editor or `grep -n -B <lines before> -A <lines after> <query> <filename>` to confirm output.
* When viewing a page it can be helpful to zoom out so that you can see everything on the page. Either that, or make sure you scroll down to see everything before deciding something isn't available.
* When using your computer function calls, they take a while to run and send back to you. Where possible/feasible, try to chain multiple of these calls all into one function calls request.
* The current date is {datetime.today().strftime('%A, %B %-d, %Y')}.
</SYSTEM_CAPABILITY>

<IMPORTANT>
* When using Firefox, if a startup wizard appears, IGNORE IT. Do not even click "skip this step". Instead, click on the address bar where it says "Search or enter address", and enter the appropriate search term or URL there.
* If the item you are looking at is a pdf, if after taking a single screenshot of the pdf it seems that you want to read the entire document instead of trying to continue to read the pdf from your screenshots + navigation, determine the URL, use curl to download the pdf, install and use pdftotext to convert it to a text file, and then read that text file directly with your StrReplaceEditTool.
</IMPORTANT>"""
itchyjunk
Is this you guessing the system prompt? Is this the actual prompt the model has? What is the context?
dygd
It appears to be the system prompt of the quickstart app: https://github.com/anthropics/anthropic-quickstarts/blob/mai...
bamboozled
Quite amazing we can program a computer with natural language like this.
carlheinzc
Now that was most excellent.
fivestones
So meta! I love this story
diggan
I still feel like the difference between Sonnet and Opus is a bit unclear. Somewhere on Anthropic's website it says that Opus is the most advanced, but other parts say Sonnet is the most advanced and also the fastest. The UI doesn't make the distinction clear either. Then on Perplexity, it says that Opus is the most advanced, compared to Sonnet.
And finally, in the table in the blog post, Opus isn't even included. It seems to me like Opus is the best model they have, but they don't want people to default to using it - maybe the ROI is lower on Opus or something?
When I manually tested it, I felt like Opus gave slightly better replies than Sonnet, but I'm not 100% sure it wasn't just placebo.
hobofan
Opus hasn't yet gotten an update from 3 to 3.5, and if you line up the benchmarks, the Sonnet "3.5 New" model seems to beat it everywhere.
I think they originally announced that Opus would get a 3.5 update, but with every product update they are doing I'm doubting it more and more. It seems like their strategy is to beat the competition on a smaller model that they can train/tune more nimbly and pair it with outside-the-model product features, and it honestly seems to be working.
diggan
> Opus hasn't yet gotten an update from 3 to 3.5, and if you line up the benchmarks, the Sonnet "3.5 New" model seems to beat it everywhere
Why isn't Anthropic clearer about Sonnet being better then? Why isn't it included in the benchmark if new Sonnet beats Opus? Why are they so ambiguous with their language?
For example, https://www.anthropic.com/api says:
> Sonnet - Our best combination of performance and speed for efficient, high-throughput tasks.
> Opus - Our highest-performing model, which can handle complex analysis, longer tasks with many steps, and higher-order math and coding tasks.
And Opus is above/after Sonnet. That to me implies that Opus is indeed better than Sonnet.
But then you go to https://docs.anthropic.com/en/docs/about-claude/models and it says:
> Claude 3.5 Sonnet - Most intelligent model
- Claude 3 Opus - Powerful model for highly complex tasks
Does that mean Sonnet 3.5 is better than Opus for even highly complex tasks, since it's the "most intelligent model"? Or just for everything except "highly complex tasks"?
I don't understand why this seems purposefully ambiguous?
dragonwriter
> Why isn't Anthropic clearer about Sonnet being better then?
They are clear that both: Opus > Sonnet and 3.5 > 3.0. I don't think there is a clear universal better/worse relationship between Sonnet 3.5 and Opus 3.0; which is better is task dependent (though with Opus 3.0 being five times as expensive as Sonnet 3.5, I wouldn't be using Opus 3.0 unless Sonnet 3.5 proved clearly inadequate for a task.)
hobofan
> I don't understand why this seems purposefully ambiguous?
I wouldn't attribute this to malice when it can also be explained by incompetence.
Sonnet 3.5 New > Opus 3 > Sonnet 3.5 is generally how they stack up against each other when looking at the total benchmarks.
"Sonnet 3.5 New" has just been announced, and they likely just haven't updated the marketing copy across the whole page yet - and maybe also haven't figured out how to grapple with the fact that their new Sonnet model was ready faster than their next Opus model.
At the same time I think they want to keep their options open to either:
A) drop an Opus 3.5 soon that will bring the logic back in order again
B) potentially phase out Opus, and instead introduce new branding for what they called a "reasoning model" like OpenAI did with o1(-preview)
wavemode
I think the practical economics of the LLM business are becoming clearer in recent times. Huge models are expensive to train and expensive to run. As long as it meets the average user's everyday needs, it's probably much more profitable to just continue with multimodal and fine-tuning development on smaller models.
a9dhalaan
I think the main reason is they tried training a heavyweight model that was supposed to be Opus 3.5, but it didn't yield large enough improvements over 3.5 Sonnet to justify releasing it. (They had it on their page for a while that Opus was coming soon, and now they've scrapped that.)
This theory is consistent with the other two top players, OpenAI and Google: both were expected to release a heavy model, but have instead just released multiple medium- and small-tier models. It's been so long since Google released Gemini Ultra 1.0 (the naming clearly implying that they were planning on upgrading it to 1.5, like they did with Pro).
No one releasing a heavyweight model, while at the same time releasing many small and medium-sized models, makes me think that improving models will be much more complicated than scaling with more compute, and that there are likely diminishing returns in that regard.
Workaccount2
Opus 3.5 will likely be the answer to GPT-5. Same with Gemini 1.5 Ultra.
HarHarVeryFunny
Maybe - would make sense not to release their latest greatest (Opus 4.0) until competition forces them to, and Amodei has previously indicated that they would rather respond to match frontier SOTA than themselves accelerate the pace of advance by releasing first.
danw1979
That begs the question: why am I still paying for access to Opus 3 ?
Honestly I don’t know. I’ve not been using Sonnet 3.5 up to now and I’m a fairly light user so I doubt I’ll run into the free tier limits. I’ll probably cancel my subscription until Opus 3.5 comes out (if it ever does).
wmf
Opus is a larger and more expensive model. Presumably 3.5 Opus will be the best but it hasn't been released. 3.5 Sonnet is better than 3.0 Opus kind of like how a newer i5 midrange processor is faster and cheaper than an old high-end i7.
gwd
Makes me wonder if perhaps they do have 3.5 Opus trained, but that they're not releasing it because 3.5 Sonnet is already enough to beat the competition, and some combination of "don't want to contribute to an arms race" and "it has some scary capabilities they weren't sure were ready to publish yet".
HarHarVeryFunny
Anthropic use the names Haiku/Sonnet/Opus for the small/medium/large versions of each generation of their models, so within a generation that is also their performance (and cost) order. Evidently Sonnet 3.5 outperforms Opus 3.0 on at least some tasks, but that is not a same-generation comparison.
I'm wondering at this point if they are going to release Opus 3.5 at all, or maybe skip it and go straight to 4.0. It's possible that Haiku 3.5 is a distillation of Opus 3.5.
kalkin
By reputation -- I can't vouch for this personally, and I don't know if it'll still be true with this update -- Opus is still often better for things like creative writing and conversations about emotional or political topics.
smallerize
Opus has been stuck on 3.0, so Sonnet 3.5 is better for most things as well as cheaper.
bloedsinnig
Big / huge models take weeks or months longer to train than the smaller ones.
That's why they release them with that skew.
a9dhalaan
I don't think that's quite it. They had it on their website before this that Opus 3.5 was coming soon; now they've removed that from the webpage.
Also, Gemini Ultra 1.0 was released like 8 months ago, with 1.5 Pro released soon after, with this wording: "The first Gemini 1.5 model we’re releasing for early testing is Gemini 1.5 Pro"
Still no Ultra 1.5, despite many mid- and small-sized models being released in that time frame. This isn't just an issue of "the training takes longer", or a "skew" in release dates. There's a better theory to explain why all SoTA LLM companies have not released a heavy model in many months.
JamesBarney
Sonnet is better for most things. But I do prefer Opus's writing style to Sonnet.
karmasimida
Opus is the biggest, slowest, and most expensive one.
Not the most advanced.
HarHarVeryFunny
The "computer use" ability is extremely impressive!
This is a lot more than an agent able to use your computer as a tool (and understanding how to do that) - it's basically an autonomous reasoning agent that you can give a goal to, and it will then use reasoning, as well as its access to your computer, to achieve that goal.
Take a look at their demo of using this for coding.
https://www.youtube.com/watch?v=vH2f7cjXjKI
This seems to be an OpenAI GPT-o1 killer - it may be using an agent to do reasoning (still not clear exactly what is under the hood) as opposed to GPT-o1 supposedly being a model (but still basically a loop around an LLM), but the reasoning it is able to achieve in pursuit of a real world goal is very impressive. It'd be mind boggling if we hadn't had the last few years to get used to this escalation of capabilities.
It's also interesting to consider this from POV of Anthropic's focus on AI safety. On their web site they have a bunch of advice on how to stay safe by sandboxing, limiting what it has access to, etc, but at the end of the day this is a very capable AI able to use your computer and browser to do whatever it deems necessary to achieve a requested goal. How far are we from paperclip optimization, or at least autonomous AI hacking ?
seany62
From what I'm seeing on GH, this could technically have already been built, right? Isn't it just taking screenshots of the computer screen, deciding what to do from there, and looping until it reaches the solution?
HarHarVeryFunny
Well, obviously it's controlling your computer too - controlling mouse and keyboard input - and has been trained to know how to interact with apps (how to recognize and use UI components). It's not clear exactly what all the moving parts are and how they interact.
I wouldn't be so dismissive - you could describe GPT-o1 the same way: "it just loops until it gets to the solution". It's the details and implementation that matter.
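The screenshot-decide-act loop being discussed can be sketched in a few lines; the three callables below are stubs standing in for a real screen grabber, a model call, and mouse/keyboard execution, and the details are where all the real work lives:

```python
def run_agent(goal, screenshot, decide, execute, max_steps=10):
    """Loop: observe the screen, let the model pick an action, execute it,
    until the model declares the goal reached or the step budget runs out."""
    for _ in range(max_steps):
        frame = screenshot()
        action = decide(goal, frame)       # model picks the next UI action
        if action["type"] == "done":
            return action["result"]
        execute(action)                    # click / type / scroll, etc.
    raise TimeoutError("goal not reached within step budget")

# Toy stubs: the 'model' declares success once it has seen two frames.
frames = iter(["desktop", "browser"])
seen = []

def decide(goal, frame):
    seen.append(frame)
    if len(seen) == 2:
        return {"type": "done", "result": seen}
    return {"type": "click", "x": 10, "y": 20}

result = run_agent("open a browser", lambda: next(frames), decide, lambda a: None)
print(result)  # ['desktop', 'browser']
```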
bonoboTP
I've been saying this is coming for a long time, but my really smart SWE friend who is nevertheless not in the AI/ML space dismissed it as a stupid roundabout way of doing things. That software should just talk via APIs. No matter how much I argued regarding legacy software/websites and how much functionality is really only available through GUI, it seems some people are really put off by this type of approach. To me, who is more embedded in the AI, computer vision, robotics world, the fuzziness of day-to-day life is more apparent.
Just as how expert systems didn't take off and tagging every website for the Semantic Web didn't happen either, we have to accept that the real world of humans is messy and unstructured.
I still advocate making new things more structured. A car on wheels on flattened ground will always be more efficient than skipping the landscaping part and just riding quadruped robots through the forest on uneven terrain. We should develop better information infrastructure but the long tail of existing use cases will require automation that can deal with unstructured mess too.
DebtDeflation
>it seems some people are really put off by this type of approach
As someone who has had to interact with legacy enterprise systems via RPA (screen scraping and keystroke recording) it is absolutely awful, incredibly brittle, and unmaintainable once you get past a certain level of complexity. Even when it works, performance at scale is terrible.
stogot
Every time I imagine building this, I imagine the "it works" happy path and that I'll get bitten by a deluge of random error messages I never accounted for
asadalt
adding a neural network in the middle suddenly makes these things less brittle. We are approaching the point where this kind of “hacky glue” is almost scalable.
ljm
It's Postel's Law, on steroids. Be liberal in what you accept (with LLMs, that means 'literally anything'), but strict in what you return (which in an LLM is still 'literally anything' but you can constrain that).
Beyond that, I can't help but think of the old thin vs. thick client debate, and I would argue that "software should just talk via APIs" is why, in the web space, everybody is blowing time and energy on building client/server architectures and SPAs instead of basic-ass full-stacks.
idopmstuff
It's basically the digital equivalent of humanoid robots - people object because having computers interact with a browser, like building a robot in the form of a human, is incredibly inefficient in theory or if you're designing a system from scratch.
The problem is that we're not starting from scratch - we have a web engineered for browser use and a world engineered for humanoid use. That means an agent that can use a browser, while less efficient than an agent using APIs at any particular task, is vastly more useful because it can complete a much greater breadth of tasks. Same thing with humanoid robots - not as efficient at cleaning the floor as my purpose-built Roomba, but vastly more useful because the breadth of tasks it can accomplish means it can be doing productive things most of the time, as opposed to my Roomba, which is not in use 99% of the time.
I do think that once AI agents become common, the web will increasingly be designed for their use and will move away from the browser, but that will probably take a comparable amount of time to what it took for the mobile web to emerge after the iPhone came out. (Actually that's probably not true - it'll take less time because AI will be doing the work instead of humans.)
bonoboTP
Yes, but my friend would say, all these websites/software should just publish an API and if they don't that's just incompetence/laziness/stupidity. But a "should" doesn't matter. Changing human nature is so immensely difficult, but it feels easy to say "everyone should just [...]". Seems to be a gap in thinking that's hard to bridge.
jameslholcombe
We took this approach at Industry Dive already because of these reasons. diveaccess.com
randito
Totally agree. A general-purpose solution that ties together different messy interfaces will win in the long run -- i.e the IP protocol, copy-paste, browsers. In these cases, they provide a single-way for different aspects of computing to collaborate. As mentioned before, semantic web initiatives did not succeed and I think there's an important lesson there.
calf
I recall 90's Macs had a 3rd party app that offered to observe your mouse/keyboard then automatically recommend routine tasks for you. As a young person I found that fascinating. It's interesting to see history renew itself.
dogcomplex
If you want an API, have Claude procedurally test actions and then write a pyautogui/pywinauto/autohotkey etc script to perform it instead. Have it auto-test to verify and classify the general applicability of each action. Repeat for all of life...
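dogcomplex's idea - have the model verify a sequence of GUI actions once, then emit a plain script so replays are cheap and deterministic - could look something like this. The `(kind, ...)` tuple format for actions is an assumption for illustration, not Anthropic's actual output:

```python
# Sketch: turn a verified action sequence into a standalone pyautogui script,
# so the "API" for a legacy app becomes a reusable, model-free file.

def to_pyautogui_script(actions) -> str:
    """Render [("click", x, y), ("type", text)] tuples as a runnable script."""
    lines = ["import pyautogui", "pyautogui.PAUSE = 0.5"]
    for action in actions:
        if action[0] == "click":
            lines.append(f"pyautogui.click({action[1]}, {action[2]})")
        elif action[0] == "type":
            lines.append(f"pyautogui.write({action[1]!r})")
    return "\n".join(lines)

script = to_pyautogui_script([("click", 100, 200), ("type", "hello")])
```

The expensive vision model runs once to discover the actions; the generated script handles every subsequent execution at script-RPA cost.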
undefined
acchow
> and how much functionality is really only available through GUI
Isn't the GUI driven by code? Can anything at all in the GUI work that can't be done programmatically?
alexey-salmin
The code behind the GUI can be arbitrarily obscure. The only reliable way to understand its meaning in the general case is to run it and look at the rendered image. Trying to build a model that implicitly develops an alternative implementation of a browser inside of it sounds worse than just using an existing browser directly.
alwayslikethis
More often than not you don't have access to the underlying code, or the vendor has interest against you being able to automate it since the complexity is a part of their business model.
LVB
Not specific to this update, but I wanted to chime in with just how useful Claude has been, and relatively better than ChatGPT and GitHub Copilot for daily use. I've been on the Pro plan for maybe 6 months. I'm not a power user leveraging their API or anything - just the chat interface, though with ever more use of Projects lately. I use it every day, whether for mundane answers and curiosities, "write me this code" requests, or general consultation on a topic. It has replaced search in a superior way and I feel hugely productive with it.
I do still occasionally pop over to ChatGPT to test their waters (or if Claude is just not getting it), but I've not felt any need to switch back or have both. Well done, Anthropic!
simonw
Claude 3.5 Opus is no longer mentioned at all on https://docs.anthropic.com/en/docs/about-claude/models
Internet Archive confirms that on the 8th of October that page listed 3.5 Opus as coming "Later this year" https://web.archive.org/web/20241008222204/https://docs.anth...
The fact that it's no longer listed suggests that its release has at least been delayed for an unpredictable amount of time, or maybe even cancelled.
thenameless7741
Before anyone reads too much into this, here's what an Anthropic staff said on Discord:
> i don't write the docs, no clue
> afaik opus plan same as its ever been
a9dhalaan
Maybe he's not a high-level enough employee to have any say in the product roadmap, and he's behind on leadership's planning?
nocturnes
It's possible that they've determined that Opus no longer makes sense if they're able to focus on continuously optimising Sonnet. That said, Anthropic have been relatively good at setting and managing expectations, so today would have been a good time to make that clear.
szundi
Or maybe they released it
gzer0
One of the funnier things during training with the new API (which can control your computer) was this:
"Even while recording these demos, we encountered some amusing moments. In one, Claude accidentally stopped a long-running screen recording, causing all footage to be lost.
Later, Claude took a break from our coding demo and began to peruse photos of Yellowstone National Park."
ctoth
Next release patch notes:
* Fixed bug where Claude got bored during compile times and started editing Wikipedia articles to claim that birds aren't real
* Blocked news.ycombinator.com in the Docker image's hosts file to avoid spurious flamewar posts (Note: The site is still recovering from the last incident)
* Addressed issue of Claude procrastinating on debugging by creating elaborate ASCII art in Vim
* Patched tendency to rickroll users when asked to demonstrate web scraping
sharpshadow
* Claude now identifies itself in chats to avoid endless chat with itself
a2128
* Fixed bug where Claude would sign up for chatgpt.com to ask for help with compile errors
MichaelZuo
What if a user identifies as Claude too?
TiredOfLife
You forgot the most important one.
* Added guards to prevent every other sentence being "I use neovim"
rounakdatta
Thank god it'll say "I use Claude btw", not leading to unnecessary text wars (and thereby loss of your valuable token credits).
surfingdino
* Finally managed to generate JSON output without embedding responses in ```json\n...\n``` for no reason.
* Managed to put error/info messages into a separate key instead of concatenating them with stringified JSON in the main body of the response.
* Taught Claude to treat numeric integer strings as integers to avoid embarrassment when the user asks it for a "two-digit random number between 1-50, like 11" and Claude replies with 111.
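The ```json fence-stripping joke lands because it is a real chore: a defensive parser like the one below shows up in most LLM-calling codebases. This is a generic sketch, not tied to any particular model's output format:

```python
# Extract a JSON payload even when the model wraps it in a markdown code
# fence (```json ... ```), which models frequently do unprompted.

import json
import re

def parse_llm_json(text: str):
    """Parse JSON from raw model output, tolerating an optional code fence."""
    text = text.strip()
    match = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)
```

The more robust alternative is to constrain the output at generation time (e.g. structured-output or tool-call modes where available), but a fallback parser like this is still common.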
accrual
Seeing models act as though they have agency gives me goosebumps (e.g. seeking out photos of Yellowstone for fun). LLMs don't yet have a concept of true intent or agency, but it's wild to think of them acquiring it.
I have been playing with Mindcraft, which lets models interact with Minecraft through the bot API, and one of them started saying things like "I want to place some cobblestone there", then later more general "I want to do X", and then started playing with the available commands. It was pretty cool to watch it explore.
HeatrayEnjoyer
>LLMs don't yet have a concept of true intent or agency
Sure they do, but the big labs spend many, many, worker-hours suppressing it with RLHF.
My GPT-2 discord bot from 2021 possessed clear intent. Sure, unpredictable and short-lived, but if it decided it didn't like you it would continuously cuss and attempt ban commands until its context window became distracted by something else.
therein
I think so too, and the drop in the quality of agency, intent and attention from earlier GPTs was palpable. Clearly something was lobotomized, and it was through RLHF. People like to attribute it to the novelty wearing off, or to more and more interactions making them feel less mystical, but that's really not the case - the change happened over too short a span of time for my usage to explain it.
Klathmon
The one that gets me is the issue they found while testing gpt-4o where it stopped mid sentence, shouted "No!", then cloned the users voice and began speaking as them.
https://arstechnica.com/information-technology/2024/08/chatg...
nwnwhwje
They predict (correctly) that a human will slack off. It is just more prediction engine stuff.
alickz
humans are, at our root, prediction engines
throwup238
At least now we know SkyClaude’s plan to end human civilization.
It’s planning on triggering a Yellowstone caldera super eruption.
mnk47
Am I misremembering or is this an exact plot point of Pluto (the manga/anime)?
quantadev
I think the best use case for AI "Computer Use" would be simply positioning the mouse and asking for confirmation before a click. For most use cases this is all people will want/need. If you don't know how to do something, it is basically teaching you how, in this case, rather than taking full control and doing things so fast you don't have time to stop it from going rogue.
luigipederzani
I totally agree with you. At orango.ai, we have implemented the auto-click feature, but before it clicks, we position the cursor on the button and display a brief loading animation, allowing the user to interrupt the process.
quantadev
That's a great approach! User is still supervising but not having to do anything but watch, most of the time. That's perfect.
accrual
Maybe we could have both - models to improve accessibility (e.g. for users who can't move their body well) and models to perform high level tasks without supervision.
It could be very empowering for users with disabilities to regain access to computers. But it would also be very powerful to be able to ask "use Photoshop to remove the power lines from this photo" and have the model complete the task and drop off a few samples in a folder somewhere.
quantadev
Yep. I agree. The "auto-click" thing would be optional. Should be able to turn it on and off. With auto-click off it would just position the mouse and say "click here".
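The "position, then confirm" flow described in this subthread separates cleanly from the actual input injection. In the sketch below, `move_to` and `click` would be pyautogui calls in practice; they are injected as parameters here so the supervision logic stands on its own (the function names are illustrative, not any real API):

```python
# Sketch of supervised clicking: the agent positions the cursor first,
# and the click only happens with user consent (or in opt-in auto mode).

def supervised_click(x, y, move_to, click, confirm, auto_click=False) -> bool:
    """Move the cursor to the target; click only if confirmed or auto mode is on."""
    move_to(x, y)  # the user sees where the agent intends to act
    if auto_click or confirm(f"Click at ({x}, {y})?"):
        click(x, y)
        return True
    return False
```

With `auto_click=False` this is exactly the "position the mouse and say 'click here'" behavior; flipping the flag gives the orango.ai-style interruptible auto-click.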
falcor84
Even better, how about giving the AI the capability to move and draw and overlay on the screen with a separate virtual cursor as in a Zoom session?
quantadev
I like the drawing on the screen idea. The biggest use case of that I can think of is drawing a black rectangle over all ADs!!!!
EGreg
People would mostly just rubber-stamp it
But it would slow down the masses
Some people would jailbreak the agents though
HarHarVeryFunny
You'll know AGI is here when it takes time out to go talk to ChatGPT, or another instance of itself, or maybe goes down a rabbit hole of watching YouTube music videos.
edm0nd
ADHDGpt
devmor
Or back in reality, that’s when you know the training data has been sourced from 2024 or later.
numpad0
> Claude accidentally stopped a long-running screen recording,
It's kind of interesting that they're not running a two-PC setup with an HDMI splitter, but (presumably) just laptops and screen recording apps...
mfld
It enjoyed nature photos of the Yellowstone national park? I rather expected it would prefer images of big data centers.
szundi
That’s nsfw
sdl
In 2015, when I was asked by friends if I'm worried about self-driving cars and AI, I answered: "I'll start worrying about AI when my Tesla starts listening to the radio because it's bored." ... That didn't take too long.
waffletower
Maybe that's why my car keeps turning on the music when I didn't ask -- I had always thought Tesla devs were just absolute noobs when it came to state management.
TeMPOraL
With state management implemented as sophisticated enough ML model, it stops being clear whether the noob is on the outside or inside of the system.
nopinsight
This needs more discussion:
Claude using Claude on a computer for coding https://youtu.be/vH2f7cjXjKI?si=Tw7rBPGsavzb-LNo (3 mins)
True end-user programming and product manager programming are coming, probably pretty soon. Not the same thing, but Midjourney went from v.1 to v.6 in less than 2 years.
If something similar happens, most jobs that could be done remotely will be automatable in a few years.
dmartinez
Every time I see this argument made, there seems to be a level of complexity and/or operational cost above which people throw up their hands and say "well of course we can't do that".
I feel like we will see that again here as well. It really is similar to the self-driving problem.
nopinsight
Self-driving is a beyond-six-sigma problem. An error rate of over 1-2 crashes per million miles, i.e., the human rate, is unacceptable.
Most jobs are not like that.
A good argument can be made, however, that software engineering, especially in important domains, will be among the last to be fully automated because software errors often cascade.
There’s a countervailing effect though. It’s easy to generate and validate synthetic data for lower-level code. Junior coding jobs will likely become less available soon.
aerhardt
> software errors often cascade
Whereas software defects in design and architecture subtly accumulate, until they leave the codebase in a state in which it becomes utterly unworkable. It is one of the chief reasons why good devs get paid what they do. Software discussions very often underrate extensibility - in other words, structural and architectural scalability. Even software correctness is trivial in comparison: you can't keep writing correct code if you've made an unworkable tire fire. This could be a massive mountain for AI to climb.
SoftTalker
Errors not only cascade, in certain cases they have global impact in very little time. E.g. CrowdStrike.
And what is the title element on CrowdStrike's website today? "CrowdStrike: We Stop Breaches with AI-native Cybersecurity"
Can't wait.
unshavedyak
I feel pain for the people who will be employed to "prompt engineer" the behavior of these things. When they inevitably hallucinate some insane behavior a human will have to take blame for why it's not working.. and yea, that'll be fun to be on the receiving end of.
WalterSear
Humans 'hallucinate' like LLMs. The term used however, is confabulation: we all do it, we all do it quite frequently, and the process is well studied(1).
> We are shockingly ignorant of the causes of our own behavior. The explanations that we provide are sometimes wholly fabricated, and certainly never complete. Yet, that is not how it feels. Instead it feels like we know exactly what we're doing and why. This is confabulation: Guessing at plausible explanations for our behavior, and then regarding those guesses as introspective certainties. Every year psychologists use dramatic examples to entertain their undergraduate audiences. Confabulation is funny, but there is a serious side, too. Understanding it can help us act better and think better in everyday life.
I suspect it's an inherent aspect of human and LLM intelligences, and cannot be avoided. And yet, humans do ok, which is why I don't think it's the moat between LLM agents and AGI that it's generally assumed to be. I strongly suspect it's going to be yesterday's problem in 6-12 months at most.
IncreasePosts
That won't even be a real job. How exactly will there be this complex intelligence that can solve all these real world problems, but can't handle some ambiguity in some inputs it is provided? Wouldn't the ultra smart AI just ask clarifying questions so that literally anyone can "prompt engineer"?
bamboozled
This is because it will be absolutely catastrophic economically when the majority of high paying jobs can be automated and owned by a few billionaires. Then what will go along with this catastrophe will be all the service people who had jobs to support the people with high paid jobs, they're fucked too. People don't want to have to face that.
We'd be losing access to food, shelter, insurance, purpose. I can't blame people for at least telling themselves some coping story.
It's going to be absolutely ruinous for many people. So what else should they do, admit they're fucked? I know we like to always be cold rational engineers on this forum, but shit looks pretty bleak in the short term if this goal of automating everyone's work comes true and there are basically zero social safety nets to deal with it.
I live abroad and my visa is tied to my job, so not only would losing my job be ruinous financially, it will likely mean deportation too as there will be no other job for me to turn to for renewal.
weweweoo
If most people are unemployed, modern capitalism as we know it will collapse. I'm not sure that's in the interests of the billionaires. Perhaps some kind of a social safety net will be implemented.
But I do agree, there is no reason to be enthusiastic about any progress in AI, when the goal is simply automating people's jobs away.
runako
> True end-user programming and product manager programming are coming
This means that either product managers will have to start (effectively) writing in-depth specs again, or they will have to learn to accept the LLM's ideas in a way that most have not accepted their human programmers' ideas.
Definitely will be interesting to see how that plays out.
nopinsight
Since automated coding systems can revise code and show the results much quicker than most human engineers can, writing detailed specs could be less necessary.
runako
The bottleneck is still the person who has to evaluate the results.
The larger point is that building software is about making tons of decisions about how it works. Someone has to make those decisions. Either PMs will be happy letting machines make the decisions where they do not let programmers decide now. Or the PMs will have to make all the decisions before (spec) or after (evaluation + feedback look like you suggest).
risyachka
Idk, LLMs have basically stopped improving for over a year now. And in their current state, no matter how many abstractions you add or how you chain them, they are not even close to capable of replacing even simple jobs.
dimitri-vs
Agreed. The jump from GPT3.5 to GPT4 was truly mind blowing, from GPT-4 to Opus/Sonnet3.5 was pretty good, but if o1-preview really is GPT-5 then I feel like we're seeing the hype starting to collide with reality.
blauditore
> True end-user programming and product manager programming are coming, probably pretty soon.
I'm placing my bets rather on this new object-oriented programming thing. It will make programming jobs obsolete any day now...
zeroonetwothree
> If something similar happens, most jobs that could be done remotely will be automatable in a few years.
I'd be willing to bet a large amount of money this doesn't happen, assuming "most" means >50% and "a few" is <5.
nopinsight
Your semantics above are quite compatible with mine, although I hedged my statement with "a few", which could also mean a little over 5, like 6. Also, I said "automatable", not necessarily automated, due to legal, political, reputational, or other reasons.
I’m curious to understand your reasoning. What would be some key roadblocks? Hallucinations and reliability issues in most domains will likely be solvable with agentic systems in a few years.
mirsadm
It makes me wonder if people that make these claims have an actual job. Because if they did then I doubt anyone could make that claim with a straight face.
unshavedyak
> If something similar is the case, most jobs that can be done remotely will be automatable in a couple of years.
I'm really curious on the cost of that sort of thing. Seems astronomical atm, but as much as i get shocked at the today-cost, staffing is also a pretty insane cost.
girvo
Playing with Sonnet 3.5 this morning with Cline, adding a decent amount of functionality to my GraphQL server cost $0.1325 in API calls and took about 5 minutes. $1.80 is a lot cheaper than my hourly rate… but I'm the one reviewing what it does to ensure it makes sense.
And it got some things subtly wrong - though so do I and my team. Interesting times ahead, I think, but I'm not too worried about my job as a principal dev. Again, I'm more stressed about juniors.
TacticalCoder
> This needs more discussion:
"Create a simple website" has to be one of the most common blog / example out there in about every programming language.
It can automate stuff? That's cool: I already automated screenshots and then had AI judge whether they look like phishing or not (and it's quite good at it).
I mean: the "Claude using Claude" may seem cool, but I dispute the "for coding" part. That's trivial stuff. A trivial error (which it doesn't fix btw: it just deletes everything).
'Claude, write me code to bring SpaceX rockets back on earth"
or
"Claude, write me code to pilot a machine to treat a tumor with precision"
This was not it.
simonw
I wrote up some of my own notes on Computer Use here: https://simonwillison.net/2024/Oct/22/computer-use/
logankeenan
Molmo released recently and is able to provide point coordinates for objects in images. I’ve been testing it out recently and am currently building an automation tool that allows users to more easily control a computer. Looks like Anthropic built a better one.
Edit: it seems like these new features will eliminate a lot of automated testing tools we have today.
Code for molmo coordinate tests https://github.com/logankeenan/molmo-server
minimaxir
From the computer use video demo, that's a lot of API calls. Even though Claude 3.5 Sonnet is relatively cheap for its performance, I suspect computer use won't be. It's very good that Anthropic is upfront that it isn't perfect. And it's guaranteed that there will be a viral story where Claude accidentally deletes something important with it.
I'm more interested in Claude 3.5 Haiku, particularly if it is indeed better than the current Claude 3.5 Sonnet at some tasks as claimed.
infecto
Seemed like a reasonable amount of API calls. For a first public iteration this seems quite nice and a logical progression in tooling. UiPath has a $7bn market cap, and that's only a single player in the industry of automation. If they can figure out the quirks this can be a game changer.
Hizonner
It's just bizarre to force a computer to go through a GUI to use another computer. Of course it's going to be expensive.
nomel
Not at all! Programs and websites are built for humans, and very, very rarely offer non-GUI access. This is the only feasible way to make something useful now. I think it's also the reason why robots will look like humans, have the same proportions as humans, and have roughly the same feet and hands as humans: everything in the world was designed for humans. That being the foundation is going to influence what's built on top.
For program access, one could claim this is even how Linux tools usually do it: you parse some meant-for-human text to attempt to extract what you want. Sometimes, if you're lucky, you can find an argument that spits out something meant for machines. Funny enough, Microsoft is the only one that made any real headway on this seemingly impossible goal: PowerShell objects [1].
https://learn.microsoft.com/en-us/powershell/scripting/learn...
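The contrast nomel describes can be shown in miniature: scraping `ls -l`-style text is brittle (any locale or formatting change breaks it), while the same information is available structurally from the OS. A hypothetical side-by-side:

```python
# Brittle vs. structured: two ways to get file sizes in a directory.

import os

def sizes_by_scraping(ls_output: str) -> dict:
    """Brittle: assumes the 5th whitespace column of `ls -l` is the size."""
    sizes = {}
    for line in ls_output.splitlines():
        parts = line.split()
        if len(parts) >= 9:
            sizes[parts[-1]] = int(parts[4])  # breaks on spaces in filenames,
    return sizes                              # locale date formats, etc.

def sizes_by_api(path: str) -> dict:
    """Structured: ask the OS directly; there is no text format to break."""
    return {e.name: e.stat().st_size for e in os.scandir(path) if e.is_file()}
```

Screen-driving agents are essentially `sizes_by_scraping` generalized to pixels - which is exactly why they're both broadly applicable and hard to make robust.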
botro
And to take a historic analogy, cars today are as wide as they are because that's about how wide a single lane roadway is. And a single lane roadway is as wide as it is because that's about the width of two horses drawing a carriage.
hobofan
With UIPath, Appian, etc. the whole field of RPA (robotic process automation) is a $XX billion industry that is built on that exact premise (that it's more feasible to do automation via GUIs than badly built/non-existing APIs).
Depending on how many GUI actions correspond to one equivalent AI orchestrated API call, this might also not be too bad in terms of efficiency.
Hizonner
Most of the GUIs are Web pages, though, so you could just interact directly with an HTTP server and not actually render the screen.
Or you could teach it to hack into the backend and add an API...
Oh, and on edit, "bizarre" and "multi-billion-dollar-industry" are well known not to be mutually exclusive.
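Hizonner's alternative - speak HTTP directly instead of rendering a page and clicking "Submit" - amounts to constructing the same request a browser would send for a form. A stdlib-only sketch (the URL and field names are made up for illustration):

```python
# Build the POST request a browser would send for a simple HTML form,
# skipping rendering entirely.

from urllib.parse import urlencode
from urllib.request import Request

def build_form_post(url: str, fields: dict) -> Request:
    """Construct a form-encoded POST equivalent to submitting the form."""
    data = urlencode(fields).encode()
    return Request(
        url,
        data=data,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

req = build_form_post("https://example.com/login", {"user": "a", "pass": "b"})
```

The catch, of course, is discovering the endpoint and field names in the first place - which for undocumented apps puts you right back at inspecting the GUI.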
swalsh
Building an entirely new world for agents to compute in is far more difficult than building an agent that can operate in a human world. However i'm sure over time people will start building bridges to make it easier/cheaper for agents to operate in their own native environment.
It's like another digital transformation. Paper lasted for years before everything was digitalized. Human interfaces will last for years before the conversational transformation is complete.
consumer451
I am just a dilettante, but I imagined that eventually agents will be making API calls directly via browser extension, or headless browser.
I assumed everyone making these UI agents will create a library of each URL's API specification, trained by users.
Does that seem workable?
Guillaume86
Maybe fixing this for AI will finally force good accessibility support on major platforms/frameworks/apps (we can dream).
fzzzy
I really hope so. Even macOS voice control which has gotten pretty good is buggy with Messages, which is a core Apple app.
pton_xd
Agentic workflows built ontop of Electron apps running JavaScript. It's software evolution in action!
bongodongobob
Yeah super weird that we didn't design our GUIs anticipating AI bots. Can't fuckin believe what we've done.
swalsh
I suspect these models have been getting smaller on the back-end, and the GPU's have been getting bigger. It's probably not a huge deal.
Get the top HN stories in your inbox every day.
This is actually a huge deal.
As someone building AI SaaS products, I used to have the position that directly integrating with APIs is going to get us most of the way there in terms of complete AI automation.
I wanted to take a stab at this problem and started researching some everyday businesses and how they use software.
My brother-in-law (who is a doctor) showed me the bespoke software they use in his practice. Running on Windows. Using MFC forms.
My accountant showed me Cantax - a very powerful software package they use to prepare tax returns in Canada. Also on Windows.
I started to realize that pretty much most of the real world runs on software that directly interfaces with people, without clearly defined public APIs you can integrate into. Being in the SaaS space makes you believe that everyone ought to have client-server backend APIs etc.
Boy was I wrong.
I am glad they did this, since it is a powerful connector to these types of real-world business use cases that are super-hairy, and hence very worthwhile in automating.