Brian Lovin
/
Hacker News
Daily Digest email

Get the top HN stories in your inbox every day.

markwu2001

PCB Designer on the team:

Holy cow, this is awesome! Integrating project context with an LLM yields more than the sum of its parts when it comes to information access.

When designing circuits for fast iterations, you're constantly managing risk and time expenditure; this leads to copying a lot of "flight heritage" circuits so to speak.

With the LLM, I'm able to get highly condensed starting points for better circuits in terms of simplicity, understandability, or performance. Right now, all I have to do is fact-check the solution, which is much easier than synthesizing one myself. One example: I got it to design a viable high-gain analog frontend that meets my gain, bandwidth, noise, and biasing requirements.

boringuser2

I have no idea what you're talking about, but my experience 100% mirrors yours in most cases as a programmer.

GPT-4 produces a lot of my code now.

I'm leap-frogging my team a bit in productivity because they still don't like it, but it's so close to being undeniable.

anthlax

I’ve been unable to replicate this. Could you please show me an example? I ask everyone who makes this claim and have yet to see a concrete example. I just can’t get it to do anything useful for me. I feel like I’m missing the boat!

porkbeer

It won't write your entire program, and you have to already know enough code to know when it gave you garbage, but... I find I can have it tackle small chunks and in some cases even glue them together in a usable way. It can often remind me of strategies I would not have thought to use, good or bad. It can also do some basic debugging, including seeing things my tired eyes often miss. That said, you kinda already need to be able to code or you won't know the wheat from the chaff. It feels a lot like managing a remote ESL person you will never meet.

MuffinFlavored

I don't get this. I asked GPT-4 to help me build a machine learning program for historical stock prices. It spat back about 8 errors, functions that don't exist, nothing compiled, multiple logic flaws...

Which one of us is lying?...

ugh123

Probably neither of you? I've had better luck coding with ChatGPT when I ask it very concise questions, mainly at the function or small class level, but still asking for enough functionality to feel satisfied. It helps to give it a "shell" of some code, or a starting point of code to build from. I also make sure to ask it not to include 3rd party libraries (I'd consider those under separate discussions with it). It's truly helpful. And that's saying nothing about copilot integration in the IDE which is sooo good.

newswasboring

None of you are. You are using the tool improperly. The tool cannot spit out entire applications yet, especially if all you told it was the final outcome. I see GPT-4 as a junior programmer in my team who is unusually productive. It cannot do the design for me, but executing it is a piece of cake for it.

SanderNL

The largest models like GPT-4 have the interesting property of really, really finishing what you started. If you start with flaws of any kind, it will continue to produce them. The inverse is true as well.

This is an actual thing[1], and it's something larger models are actually worse (better?) at. They score higher and higher on the loss function (did I predict correctly?), but their utility (does it work?) goes down.

Just thought it was noteworthy.

[1] https://arxiv.org/abs/2102.03896

boringuser2

Asking GPT-4 to make you an entire application as a complete amateur is absurd and shows you don't understand the context window.

bombolo

Maybe he just writes really really really simple code all day and faked his way into a developer position? And then his team mates have to fix/redo anything he does?

I've seen it happen several times.

gfodor

I’ve been interested in learning how to design PCBs, I wonder if this tech helps constrain the set of things one ought to focus on learning, with the assumption that AI can fill in other areas.

markwu2001

I love teaching PCB design! Like you, I'm really curious about AI's contributions to learning. My opinion is that automating search and execution increases the impact one person can make, so more time can be spent on connecting the dots at a higher level. Whether education catches up in time is another story haha. It's pretty open ended right now, but it's undeniable that there's a huge educational gap there.

gabereiser

This is what's been holding me back from PCB design. Like, I know the math, but there are forces at play beyond just V1/R1=V2 kinda stuff. I'm really keen to see if an LLM can assist a hobbyist in making sound, quality designs that aren't going to break his/her wallet when the sheet comes in and none of it works.

About 12 years ago I designed an RF addon board for an RC car (to toggle lights). Even something as simple as switching with NPN transistors looks good on the bench software but when the chips came it didn't work at all. Did I mistakenly ask for PNP? Ugh! I used fritzing at first (still great software) and then upgraded to Eagle. I want to do something beyond Raspberry Pi/Arduino work but not quite ready to design my own radio/motherboard. I know electricity but I don't know why I would need a capacitor/resistor/transformer other than the very basics.

What resources would you recommend since you love to teach PCB design?

neodypsis

Have you tried KiCad? There are some interesting courses on Udemy about using it for PCB design.

nullsense

How often is it able to get it right first try?

markwu2001

I'll interpret "right" as in giving correct information. I found that it depends on the type of question asked; if it's somewhat general information such as design concepts and design topologies, it impressively nails it 99% of the time for my limited testing period. The main area it gets things wrong relates to very specific information that has only one source: e.g. datasheet specs. I heard the team already has plans to mitigate hallucinations on this front.

Also, I find myself having a conversation as I would with a real person, so it's less of spitting out the right answer for any technical question I might have, but more of a design buddy to consult when you're blocked on your search through the massive electronics solution space. Of course, this is a very preliminary use case and I can't wait to see what other people use it for.

greg

Engineer on the team here. Some of the interesting challenges of doing ChatGPT for circuit design were:

1) How to communicate the circuit connections to ChatGPT in the text prompt. We ended up just using the standard netlist format. Apparently it knows about netlists from its training data. It also knows about datasheets.

2) How to get the needed accuracy and precision for questions like "calculate the resistance needed between component A and B". For that, we adopted the popular langchain framework to integrate specific "tools" into the AI for specific recognizable problems.
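The tool pattern greg describes (popularized by langchain) boils down to the model emitting a structured call that deterministic code executes, so the LLM never does the arithmetic itself. Here is a minimal plain-Python sketch of that pattern; this is not Flux's actual code, and the tool name and call format are invented for illustration:

```python
# Illustrative sketch of the "give the LLM tools" pattern: the model
# emits a structured tool call; deterministic code computes the answer.
# (Hypothetical names; not Flux's implementation.)

def resistor_for_led(v_supply: float, v_led: float, i_led: float) -> float:
    """Deterministic tool: series resistance for an LED, via Ohm's law."""
    return (v_supply - v_led) / i_led

TOOLS = {"resistor_for_led": resistor_for_led}

def dispatch(tool_call: dict) -> float:
    """Route a model-emitted tool call to real math instead of letting
    the LLM guess at the arithmetic."""
    return TOOLS[tool_call["name"]](**tool_call["args"])

# e.g. the model decides it needs R = (5V - 2V) / 20mA:
call = {"name": "resistor_for_led",
        "args": {"v_supply": 5.0, "v_led": 2.0, "i_led": 0.02}}
print(dispatch(call))  # 150.0 (ohms)
```

The same dispatch idea extends to datasheet lookups: the model asks for a spec by part number, and a database, not the model's memory, supplies the value.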

arcticbull

Out of curiosity, I noticed in the video that it recommended for a voltage regulator either an LM7805 or an LM1117 - either a super inefficient 5V regulator that has no business in new designs, or a 1.2V low-dropout 800mA regulator that is probably not appropriate to the other components it recommended (i.e. an LM7805 -> LM1117 would be 36% efficient at best).

It then also recommends a LiPo battery with a nominal 3.7V which obviously wouldn't be compatible with its LM7805 recommendation.

It also recommends an ESP8266 which is a 3.3V part so neither of the recommended regulators, nor the battery alone, would be appropriate.

Also, it recommends a 168MHz STM ARM micro - but it already has an ESP8266 with GPIO.

I'm not sure what it's suggesting I build but it doesn't really sound like a working environmental radiation logger ;)

How should I think about this?

rock_hard

In this case it seems to be giving examples of common ICs that are voltage regulators - these answers to high level questions are pretty vague by default.

We've had good results by asking follow up questions like "what power supply IC could I use to power a 3.3v microcontroller from a 3.7V nominal LiPo battery over the whole battery voltage range?".

I asked that question and got: the LM3940 and AMS1117-3.3 (LDOs; high dropout, but probably OK for low current; one isn't in stock, though the tool doesn't have stock info yet), and the TPS62203 and ADP2108 (bucks, both seem appropriate). These answers should get better as Copilot gets access to more and more tools.

arcticbull

btw I hope my comment didn't come across as too negative, like many folks I'm still trying to frame my own thinking around this kind of tooling. Thanks for the follow-up.

jrexilius

My god.. if you gave it real time stock updates to adjust BoM answers to availability that would be amazing!

assimpleaspossi

I find comments similar to yours about ChatGPT all over the internet. I finally took the time to ask it some questions over the last few days. All the answers it gave were mundane, the kind you'd expect from any clickbait website. I Googled the same questions and found similar answers.

Some personal questions it refused to answer or give suggestions because "only you can decide" was the response.

I'm very suspicious of all this. I'm thinking the only real benefit--some may say--is the chat aspect. You can carry on a conversation rather than having to determine a proper Google search over and over again.

Which makes me think there is some hype involved.

outworlder

> Which makes me think there is some hype involved.

There's _a lot_ of hype.

It's a good tool if you forget about any "intelligence" and just think of it as a tool. We don't expect search engines to be 'intelligent', we expect them to return results we are interested in, in an efficient manner.

In the case of this and similar tools, it is a text generator. It will generate _something_ based on your input. In many cases, it will make up stuff because it has to generate something. Note how it will not ask follow-up questions to 'understand' you (because it doesn't understand anything). You have to apply judgment and ask the follow up questions yourself.

One thing I've found these tools useful for is mitigating the "blinking cursor on an empty document" paralysis. I just asked the "Pulumi AI" to generate some code that, while not really correct for my needs, is a good starting point for modification.

If you ask it to generate something for you, rather than just a pre-made response you could Google, it's more useful.

gfodor

You’re probably not using GPT-4. The internet is being flooded with takes like this about GPT-3.5’s quality simply because the author doesn’t know there is a meaningful difference with the new model.

evancox100

I mean the more fundamental problem with the power supply is it recommending a "USB-to-Serial converter for powering the device." (See second example image.)

I'm overall bullish on AI systems, but I'm not sure this space has enough detailed context to train on. Much of this knowledge is in-house/proprietary. We'll see how fast these are able to improve over time.

stavros

Why is the battery alone not appropriate for the ESP8266? It runs just fine off a lithium battery.

arcticbull

Does your module maybe have a built-in VRM? The datasheet said the operating voltage range was 2.5-3.6V. A LiPo battery has a nominal voltage of 3.7V and a fully-charged voltage closer to 4.2V. That's definitely out of spec without a VRM.

joshuanapoli

How do you deal with designs whose netlist or part list exceeds the context size limit?

rock_hard

It’s indeed a challenge.

For many applications, such as talking to PDFs, you can use a vector DB like Pinecone.

But that doesn’t work well for schematics, because for most of the interesting use cases you need access to the whole thing.

Here is what’s been working for us:

1. Extreme compression and summarization of the data we dynamically put in the context

2. Leveraging the extreme amount of world knowledge the cutting-edge models have

ugh123

Do you do any fine tuning of the model for circuit analysis and other hardware design literature?

ZephyrBlu

I'm curious if you fine-tuned a GPT model with specific hardware-related examples. Based on the OpenAI docs it seems like a pain to get enough good prompt:response pairs for fine-tuning (A few hundred minimum).

exmadscientist

Senior EE here! I'm not at all impressed with this page because you don't show the tool doing anything interesting. All your examples are undergrad-course or lower level of difficulty. I feel like reading several of the tool's responses actually wasted my time because it took longer for me to read the text than to come up with a better answer. (Though admittedly I've been at this a while. And the check-if-I-screwed-up-the-chip-select one was pretty neat, I did like that!)

Can you demo something with a little more bite to it? Seeing that you can help me save time on actually-meaty pieces of design work would make a lot of difference for me. Maybe something like "design a correctly biased class-AB output stage, with rail-to-rail swing on +/-5V rails, intended for use boosting op-amp output current"? That is an interesting one because (1) it's a small but real, useful, meaty problem with (2) a lot of examples out there in the world to copy from with (3) a nontrivial added constraint in rail-to-rail out but (4) even solutions that aren't all the way there are useful.

kramerger

Not a senior EE, but equally unimpressed.

The thing about LLM is that it answers based on data it has seen before. So if enough posts on stackoverflow in 2021 claimed mps340 is the best CPU for beginners, that's what the AI will recommend.

There are three problems with this: it can and will be gamed; it is usually conservative rather than bleeding edge; and you can't ask questions you didn't know you needed to ask, so future products built by some random dude using ChatGPT will have reliability and safety issues.

spacebanana7

> The thing about LLM is that it answers based on data it has seen before.

Modern LLMs are able to perform web searches to make decisions on contemporary data. Once they have proper API support your concerns should be resolved, hopefully in a few weeks.

> reliablity and safety issues.

The solution to this is fine tuning / RHLF. OpenAI have done a pretty extensive job at getting political safety for ChatGPT with RHLF. It seems reasonable that RHLF could achieve a similar result in the hardware domain.

> you can't ask questions you did't know you needed ....

Solvable by prompt engineering. You can wrap user input in a prompt. As a toy example: "Here is user input $userInput if you have safety concerns about their project please respond with questions you think the user forgot to ask". Might also be possible to tweak with fine tuning/RHLF.
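The wrapping described above is plain string templating around the user's input; a toy sketch, with the template text invented for illustration (not OpenAI's or Flux's):

```python
# Toy prompt wrapper (hypothetical template text).
def wrap_for_safety_review(user_input: str) -> str:
    """Embed raw user input in a prompt that asks the model to surface
    the safety questions the user forgot to ask."""
    return (
        "Here is user input:\n"
        f"{user_input}\n"
        "If you have safety concerns about their project, respond with "
        "questions you think the user forgot to ask."
    )

prompt = wrap_for_safety_review("Designing a 20A motor driver on 1oz copper.")
```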

exmadscientist

"RHLF" is Reinforcement Learning from Human Feedback? (Strange acronym.)

I don't see how that helps a tool become useful to a very skilled person, if the "human" side of things is polluted by 95+% of users having very low skill. It's great that you can train LLMs on the world's best reference material! But I don't see how you can get the world's best updates into that training set without hiring the world's best experts. So the tool will have very little value for anyone above a certain skill ceiling. Search has already fallen victim to this effect (I'm tired of results pages full of beginner material when I have a deeper question!) and I cannot see this being better for augmentation training sets for LLMs.

ur-whale

> I'm not at all impressed with this page because you don't show the tool doing anything interesting.

a) has there been anything like this before?

b) give it a year and see what it does

potatofrenzy

I've been playing with this for a while. In my experience, current LLMs work really well for looking up general electronics knowledge and reinterpreting it in the context of your problem. "What are the reasons my resistor-based voltage divider isn't working", that kind of stuff. Not useful for seasoned EEs, great for hobbyists.

But the moment you're asking LLMs to reason about the specs or the applications of specific chips, they will give made-up answers around 80% of the time. This is probably a matter of the data being a bit too sparse. Pick a chip and ask about its supply voltage range, and it will probably get it wrong, even for the most popular stuff.

exmadscientist

> the moment you're asking LLMs to reason about the specs or the applications of specific chips, they will give made-up answers around 80% of the time. This is probably a matter of the data being a bit too sparse. Pick a chip and ask about its supply voltage range, and it will probably get it wrong

But isn't your supply voltage example exactly the sort of stuff they should get right? It's just regurgitating data sheets.

I kind of expect a tool like this to be able to operate in one of two, theoretically distinct (but maybe not practically distinct) modes. One is basically "blueprints", very much in line with the example I suggested earlier: draw me up one of these that fits in here. I wouldn't expect creativity, just more or less rote execution of a common plan. The other mode let's call "tuning": continuing with the class-AB stage example, this would be things like setting the biasing resistors correctly. That's a tedious task, possibly requiring simulation to do well, possibly just copyable, but with a large margin for usable results. I may not care exactly what I get as long as it's somewhat workable, which would be a good place for machine help. Or I might just want a decent starting point for my own simulation. I think "AI" techniques could handle either of those general modes and produce useful (if imperfect) results that save overall time.

What I don't expect AI to do is design architectures for me, or pick approaches. A tremendous amount of my value-add as an engineer is just me saying "no, don't choose approach A, I know it looks nearly equivalent to B on paper, but B is going to work out better". AI is not there now and I don't see LLM-style AI getting there any time soon, just by its nature. (At least not when it's a genuine judgement question and not an education/"A is never right" situation.)

What I don't think AI can do but I really, really wish it could is help me pick parts. All it needs to do for this is read and halfassedly-understand a bunch of datasheets, more datasheets than I can read myself. I think LLMs can do that! Though they might need to also be able to read graphs or simple diagrams ("Find me an EEPROM in the 208-mil SOIC package" is a great prompt and seems thoroughly actionable! But sometimes that dimension is only provided as part of the package drawing, which might be an issue.)

Recently I needed a PMOS FET with particular specs and was getting frustrated not finding quite what I wanted. So I asked ChatGPT. It was great at making me laugh and giving me a little break from datasheets! It did not actually help find a part. (It kept suggesting NMOS parts when I needed and asked for PMOS.)

bombolo

So, kinda the same level as for software development. But people keep repeating that I will be unemployed within 1 year.

rock_hard

Founder here:

Incredibly stoked to announce this today!

This is a dream a long time in the making. When we first thought of rebuilding the hardware design tooling stack from scratch, AI-driven design seemed like the light at the end of the tunnel... only 3+ years later, here we are.

Given the open ended nature of AI Expert agents we are launching a community preview. We are committed to keep building http://flux.ai in the open and are excited to discover this new realm with all of you.

Hit me with questions

steve_adams_86

If this works well, are you interested in training a model specifically around the domain you work in so you could get deeper integration with your product?

I have a personal project similar in concept to flux, but rather than working at the circuit level, it's at the device, firmware, and infrastructure level. I've been trying to use GPT to intuit how to network devices using different protocols, such that it might be easier for lab techs (for example) to discover how to manage and maintain their experiment tooling and infrastructure. It's really hard, though. The outputs and level of integration are mind-bendingly good, but... It's not perfect, which makes it too clumsy or even dangerous for what I want to build.

I think I envision something like GPT transforming human language into prompts for a narrower model with more predictable outputs, which is far more aware and capable with the application's logic and data. The user experience could then give you the conversational ease, yet far better application integration as well.

Anyway, just curious what you're thinking and very excited about projects like this. This is really cool.

rock_hard

Yea, def looking into this.

We already use a multi-model approach and a mix of non-deterministic and deterministic tooling in the pipeline to optimize for a given task.

Lots more work to be done here

heyzk

The flux docs indicate that you're using a custom-trained LLM and in another comment it was mentioned you're using LangChain (to integrate non-deterministic tooling perhaps). Just curious, but are you using some combination of your own model, GPT3/4, and calculators / datasheet readers / etc?

nunuvit

Have you reevaluated the decision not to implement much by way of basic interface features like symbol and footprint editors for such a long time? It's a real barrier to even playing around with the software. I want to see you succeed, and my experience in the EDA industry and as an electrical engineer is making me wonder about that approach.

It makes sense if you're just shopping it around to the EDA incumbents since they'd have you redesign those parts anyway, but the fact that you currently offer a paid subscription to users makes me think that either you've underestimated or I've overestimated the importance of having those features by now. I guess that could also make sense if you want a more manageable user base for now.

rock_hard

Not sure I follow…maybe there is a misunderstanding here

We do have symbol and footprint creation and editing capabilities…in fact the vast majority of parts on the platform have been created by users.

Now what we don’t do is offer a modal editor like legacy tools do… instead it all happens in the same mode, which simplifies a lot of the UX and also provides powerful new abilities, such as using a schematic to create simulation models.

Now that said, I think we can do a lot more here to further improve the experience, and we def will. But you are right that it hasn’t been much of an issue, which is because you have access to a vast preexisting library on the platform.

nunuvit

The last time I tried to evaluate it was a long time ago and it seemed to be all svg uploads. I had to re-read the docs more closely after reading your comment. It says you need to enter svg only for non-rectangular symbols, but it's not clear if you can create or modify a pad shape other than circular or rectangular from within the tool. Obviously good progress, and more than I understood when I wrote my previous comment. That's minimum viable for many to start using it, so my concerns are addressed.

The single mode thing is cool.

causi

GPT is really bad at math. Doesn't that mean it's also bad at doing the math for designing circuits?

steve_adams_86

Maybe not. It understands data sheets, and seems to be able to connect the dots based on probabilities around connecting components together. If you don't ask it to do large-scale design tasks, it's okay at narrowly scoped tasks like this.

If it's wrong, you can determine that during simulation. You could also assert that in the background, I suppose — prune the bunk suggestions by the language model before presenting them.

In my own experiments I find GPT can succeed in complex tasks if you prompt it multiple times at different steps of the task, and have some level of recursive validation and re-prompting to keep it in line. In a case like this I have a feeling it would do a great job, and would be especially useful for the discovery/exploration phase of a new project.
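The multi-step prompting with recursive validation described above is essentially an ask/check/re-prompt loop. A sketch, with `ask_llm` and `validate` as hypothetical stand-ins (stubbed here so the control flow is concrete):

```python
# Sketch of a re-prompting loop: ask, validate deterministically, and
# feed the failure reason back into the prompt until the answer passes.

def reprompt_until_valid(ask_llm, validate, prompt: str, max_tries: int = 3):
    for _ in range(max_tries):
        answer = ask_llm(prompt)
        ok, reason = validate(answer)
        if ok:
            return answer
        prompt += f"\nYour previous answer failed a check: {reason}. Try again."
    return None  # give up after max_tries

# Stubs standing in for a real model and a real design-rule check:
def fake_llm(prompt):
    # Suggests an NMOS first, then a PMOS once corrected.
    return "PMOS part: ..." if "failed a check" in prompt else "NMOS part: ..."

def needs_pmos(answer):
    return ("PMOS" in answer, "part must be P-channel")

print(reprompt_until_valid(fake_llm, needs_pmos, "Find me a power FET"))
# prints: PMOS part: ...
```

With a real model, `validate` could be anything deterministic: a polarity check, a voltage-range lookup, or even a quick simulation pass.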

hammyhavoc

Here's another interesting idea: the marketing blurb mentions reducing costly errors—but with the amount ChatGPT hallucinates, how does this work? What steps have been taken to mitigate hallucination?

rock_hard

LLMs do indeed hallucinate.

What’s been working well for us is to give it access to factual data sources such as calculators, component data sheets, etc.

It does really well from our tests in the past few weeks, so well that we decided to invite everyone to explore the boundaries with a community preview

Just give it a shot and report back

xyzzy123

Have you tried recently? I am getting surprisingly good results lately with GPT-4 on basic physics problems with no prompt engineering (just stating the problem). It seems to recognize when it needs to "think step by step" and just do it.

rock_hard

I hope it would not lol

For starters: From our testing it’s really extremely good at providing the right formulas to solve the problem at hand.

But you are right, these models are inherently non-deterministic and can therefore struggle with things like math. The solution is to provide it access to tools such as a calculator, which works really well.

jareklupinski

when I'm building circuits, usually I know what I want to design on a high level, but once it comes time to turn the schematic into a BOM, I get bogged down in minute differences between similar components

e.g. most recently, I needed a boost converter with at least 300mA sustained current at 3.3V, coming from the lowest-voltage source possible (alkaline battery powered gadget). After an hour or so of searching I had to choose between the TLV61220 and the TPS61021A. Maybe there is a better third chip? At this point I ran out of time to research and just figured the TPS61021A would do.

Digikey/Mouser/etc have parametric filtering (when it works), but optimization among three variables is not the strong suit there, especially when you have, say, an 8mm square to fill and cost is a factor (and availability/flexibility between footprints)

I would love to use this as a fair recommendation engine (that isn't sponsored by TI, skewing all results towards their products) where I can give it my circuit's minimum/recommended/maximum parameters and have it suggest applicable parts, or even take my finished designs and suggest cheaper alternatives for parts that match the design's footprints but have lower capabilities, if that's all my circuit needed in the first place.

(just threw my query into chatgpt and it suggested https://www.ti.com/product/TPS61221) not bad.
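The recommendation engine wished for above is, at its core, multi-constraint filtering plus ranking. A toy sketch; the part numbers are from the comment, but every spec value below is an invented placeholder, not taken from datasheets:

```python
# Toy parametric recommender: filter candidates by hard constraints,
# then rank survivors by cost. All spec values here are invented
# placeholders, not real datasheet numbers.

PARTS = [
    {"name": "TLV61220",  "i_out_ma": 200,  "v_in_min": 0.7, "cost": 0.60},
    {"name": "TPS61021A", "i_out_ma": 1500, "v_in_min": 0.5, "cost": 1.10},
]

def viable(part, i_needed_ma, v_in_available):
    """Hard constraints: enough output current, starts at our input voltage."""
    return (part["i_out_ma"] >= i_needed_ma
            and part["v_in_min"] <= v_in_available)

def recommend(parts, i_needed_ma, v_in_available):
    """Cheapest viable parts first."""
    candidates = [p for p in parts if viable(p, i_needed_ma, v_in_available)]
    return sorted(candidates, key=lambda p: p["cost"])

picks = recommend(PARTS, i_needed_ma=300, v_in_available=0.9)
print([p["name"] for p in picks])  # ['TPS61021A'] (with these invented specs)
```

An LLM's role here would be translating the designer's free-text requirements into these structured constraints; the filtering itself should stay deterministic.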

newswasboring

I would love to hear/read about your journey. Especially what your thoughts were at the start. There wasn't much like ChatGPT when you started 3 years ago, wasn't that fog scary af? How many times did you pivot/change your AI models?

rock_hard

We knew that we had to build a foundation first to have any shot at getting AI right.

I worked at Meta for years and knew first hand that ML/AI can only ever get good with loooots of training data.

So we needed a way to produce lots of high quality training data first and a way for users to be able to reason about results and edit them.

There is a graveyard of failed AI hardware design projects, and one thing they all have in common is that they are black boxes… it’s very obviously the wrong approach. But every week a new one pops up… I don’t even understand why anyone funds this stuff anymore, frankly… it can’t work.

So we embarked on building our own eCAD editor first… that was going to be step number one. It was also the hardest and most expensive way to go about it… but if you have big dreams then you just have to go and do big things. I resent these get-rich-quick schemes personally anyways… I love doing hard things!

LLMs starting to get good right around the time we had the editor ready for public launch (after years of beta) was a happy coincidence that makes us look like geniuses now. But I’ll take it lol.

We had been actively playing on the side for the past 6 months with ideas for the first AI features as we geared up for the launch of the core editor, but the critical insight to use our existing project comments surface was a last-minute flash that came to us about 6 weeks ago.

Things went pretty quick from there and we are moving extremely fast now that we have the right foundation.

Exciting times!

gowld

> rebuilding the hardware design tooling stack from scratch

This kind of talk is tacky and repels technical users.

masto

Opened site. Chat popped up in the corner and started making sound effects. Closed site.

I guess it doesn't matter if a few people are annoyed, if the end result of these things is a new increase in conversions. But I wish "don't annoy people" was actually in the product requirements.

LesZedCB

> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

https://news.ycombinator.com/newsguidelines.html

ngngngng

Just flag the comment

jrockway

Most "annoyance blocker" filterlists get these. In this case it's intercom.io which I block with Adguard's Annoyances lists: https://adguard.com/kb/general/ad-filtering/adguard-filters/

satvikpendem

I can't imagine using the internet without uBlock Origin, SponsorBlock, and a whole host (no pun intended) of filter lists.


spaceman_2020

Ah the trusted HN top comment that’s never about the product but a complaint about a minor marketing annoyance on the website.

rideontime

Still wondering about your friends' MegaCryptoPolis investments

spaceman_2020

What are you talking about?

samcheng

uBlock Origin on Firefox hid that annoyance for me.

mhb

I recently used ChatGPT to find a part. Maybe this was an anomalous result, but it was wrong. Don't know if this will be an issue with your project.

Me: Is there a digital to analog converter with an 8V analog range and serial input?

CGPT: Yes, there are several digital-to-analog converters (DACs) available with an 8V analog range and serial input. One example is the AD5541A DAC from Analog Devices. It has a 16-bit resolution and can accept serial input in the form of SPI or QSPI signals. The output voltage range is programmable and can be set to 0 to 8V using an external reference voltage.

Another example is the MAX5389 DAC from Maxim Integrated. It has an 8-bit resolution and can accept serial input in the form of I2C signals. The output voltage range is also programmable and can be set to 0 to 8V using an external reference voltage.

These are just a couple of examples, but there are many other DACs available from different manufacturers that meet your specifications.

Me: Those parts are limited to 6v max

CGPT: I apologize for the mistake in my previous response. You are correct that the AD5541A and MAX5389 DACs have a maximum output voltage range of 5V, and they cannot meet your requirement of an 8V analog range.

However, there are other DACs that can meet your specifications. For example, the DAC5578 from Texas Instruments is a 16-bit DAC that can operate from a single 5V supply and has a programmable output range up to 10V. It also supports a serial interface with SPI or MICROWIRE protocols.

Another option is the MCP49xx series from Microchip, which includes several DACs with output voltage ranges up to 10V and a serial interface with SPI or QSPI protocols.

I hope this information is helpful, and please let me know if you have any further questions.
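The failure mode in that transcript (a confidently cited part that violates a hard spec) is exactly the kind of thing that is cheap to check mechanically before trusting the answer. A minimal sketch of such a filter; the spec numbers below are illustrative placeholders, not verified datasheet values:

```python
# Sketch: mechanically check LLM-suggested DACs against hard requirements.
# The numbers below are placeholders -- always confirm against the datasheet.

REQUIREMENTS = {"min_output_range_v": 8.0, "serial_input": True}

candidates = {
    "AD5541A": {"max_output_v": 5.0, "serial_input": True},    # placeholder
    "MAX5389": {"max_output_v": 5.0, "serial_input": True},    # placeholder
    "AD5758":  {"max_output_v": 10.0, "serial_input": True},   # placeholder
}

def meets_spec(part: dict, req: dict) -> bool:
    """Reject any part whose output range or interface fails the requirement."""
    return (part["max_output_v"] >= req["min_output_range_v"]
            and part["serial_input"] == req["serial_input"])

viable = [name for name, spec in candidates.items()
          if meets_spec(spec, REQUIREMENTS)]
print(viable)  # only parts that actually cover the 8V range survive
```

A check like this doesn't make the LLM right, but it turns "fact-check the suggestion" into a one-line lookup instead of a datasheet dive.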

steve_adams_86

I've had this experience as well. I find the trick is to provide as much information as possible and keep the expected solution's scope to a minimum. The fact that Flux includes context about the project a user's working on will probably help more and more as the project is built out, which is cool. The second part, though, keeping query scopes narrow, is up to the user, I guess.

It would be interesting if circuit simulation could be used to vet out suggestions, but you'd need a vast database of components and their pinouts, GPT or something else would need to know how to arrange the circuit in order to simulate them properly, and so on. That's the only way I can think to validate outputs at the moment, though.
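Full SPICE-level validation needs component models, but even closed-form checks catch a lot. A toy sketch, assuming a hypothetical scenario where the LLM proposed a first-order RC low-pass and the requirement is a -3 dB point near 1 kHz (the R and C values here are made up for illustration):

```python
import math

def rc_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """First-order RC low-pass -3 dB frequency: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Hypothetical LLM suggestion: R = 1.6 kOhm, C = 100 nF
fc = rc_cutoff_hz(1.6e3, 100e-9)

target_hz, tolerance = 1000.0, 0.05   # want 1 kHz +/- 5%
ok = abs(fc - target_hz) / target_hz <= tolerance
print(round(fc, 1), ok)  # fc lands near 994.7 Hz, inside the 5% window
```

It's nowhere near a simulation, but a library of checks like this could vet a lot of suggestions without needing pinout databases.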

pulvinar

Interesting. I tried the same question (each in a new conversation, GPT-4): 10V and above it catches, but 8V seems to be in its blind spot. Maybe it hasn't learned anywhere to take absolute maximum ratings seriously?

pulvinar

Update: after asking it to explain absolute maximum ratings and check that for this part (which it knew correctly) and pointing out its error, it finally suggests a reasonable part, the AD5758.

It's clearly missing experience with magic smoke.

hammyhavoc

Can report similar experiences.

Just signed up for a free account, will give it a try, but expecting hallucination to be a major issue.

CamperBob2

Same here, when asking for an N-channel complement for a P-channel power FET. It would get some of the parameters right, such as dissipation and package type, but would get others wrong, like insisting on 800V Vds parts when the original target was a 50-volt part.

Promising, but not quite ready for prime time yet.
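That kind of mismatch is also easy to screen for mechanically. A sketch of a sanity filter that rejects complement candidates whose Vds rating is wildly out of line with the target part's; the 1x-3x band is an arbitrary illustrative choice, not a design rule:

```python
def complement_ok(p_vds_v: float, n_vds_v: float,
                  min_ratio: float = 1.0, max_ratio: float = 3.0) -> bool:
    """Accept an N-channel candidate only if its Vds rating is comparable
    to the P-channel target's: an 800V part against a 50V target fails."""
    ratio = n_vds_v / p_vds_v
    return min_ratio <= ratio <= max_ratio

print(complement_ok(50.0, 60.0))   # comparable rating -> accept
print(complement_ok(50.0, 800.0))  # 16x over -> reject
```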

numpad0

ChatGPT answers entirely from its memory, so it's not suitable for hard data. A Bing Chat type of architecture is more suited, though it still couldn't find a part matching those criteria.

TT-392

Similar experience here. The only thing I have found it to be somewhat okay at is figuring out what would be the somewhat "jellybean" version of a component I am looking for. It is sometimes kinda difficult to figure that out through the usual parametric search. Though, even then, it has recommended components that are very much not what I asked for, stuff as simple as recommending an NPN instead of a PNP.

gowld

This report is unhelpful without the GPT version you are using.

mhb

ChatGPT Mar23

porkbeer

That's the current date for both 3.5 and 4; which are you using?

negative_zero

I don't mean to sound like a real downer, but I don't see this tool "getting good" for the simple reason of: what are you training it with?

I thought ML needs huge datasets? This is great for GitHub's Copilot as there is plenty of high quality, production, open source code they've used without authorisation from the creators (and possibly proprietary code).

But this dataset just doesn't exist for hardware development on the internet. Almost everything on the net is hobbyist stuff. Great for hand building 10 on a desk but absolute rubbish for a (even horribly) manufacturable and COMPLIANT product.

Further: an anecdotal, but to me very telling, one-liner from the video: "the component is 'not for use in new designs' but we can ignore that because there's 260,000 of them". I have personally sniped larger quantities than that from Digikey. "Not for new designs" means you don't use it in a new design. PERIOD. And ironically, for something like a Murata cap, they've probably literally changed one letter in the PN because they've made a tiny process or recipe change, and it's more of a "move everyone to the new iteration" process.

I do sincerely wish them success, and maybe they can carve out a small bit of the hobbyist market, but that's probably it.
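The lifecycle-flag point above is, at least, a mechanically enforceable rule rather than something an LLM needs to "learn". A sketch with hypothetical part records, where a status flag hard-vetoes a part no matter how much stock the distributor shows:

```python
# Sketch: hard-filter parts by lifecycle status before any LLM ranking.
# Part records and statuses here are hypothetical examples.
BANNED_STATUSES = {"NRND", "Obsolete", "Last Time Buy"}  # NRND = Not Recommended for New Designs

parts = [
    {"pn": "CAP-001", "status": "Active",   "stock": 500_000},
    {"pn": "CAP-002", "status": "NRND",     "stock": 260_000},   # big stock doesn't matter
    {"pn": "CAP-003", "status": "Obsolete", "stock": 1_000_000},
]

usable = [p["pn"] for p in parts if p["status"] not in BANNED_STATUSES]
print(usable)  # stock level never overrides a lifecycle flag
```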

_ea1k

That's generally true for ChatGPT generating code as well. Nevertheless, I've been able to ask it to solve some semi-complicated coding problems, along with unit tests. The code it generated was structurally better than that of a lot of junior to mid-level engineers.

It also made some similar mistakes to what you describe, with silly things like the wrong parameter names. It still saved a lot of typing compared to doing it all manually. It didn't take long to fix it either, it just required fairly decent software engineering skills.

How is this relevant to circuit design? Well, in the same way it could potentially be an effort amplifier for people who already thoroughly know what they are doing.

AI is on the verge of becoming a really powerful lever.

rock_hard

I think you should give it a shot and see for yourself

This is a first step today; it's neither done nor perfect…but it kept blowing our minds and those of our most senior testers, so we decided not to withhold this incredible technology and rather invite open experimentation and learning.

aloer

The timing could not be more perfect. Yesterday I realized that I need to create my very first PCB to repair and smartify a device that I broke. Today I thought I should use ChatGPT to better understand how things work. And now this.

There are a lot of space constraints so first thing is figuring out if it is even possible to build as small as needed. Usually I would probably be better off starting to build big first and then scale down. But I feel confident that I can already get quite far with the amazing resources out there.

I spent hours watching videos about soldering, PCB design, some of the components I think I would need (very flat flexible cables, small ESP32 SMD, no space for USB) and then how to start actually doing the work. My last exposure to electronics was in grade 8 and that was very heavy on cables.

This 3h video here was very helpful to understand the process. It's an overview of building your own ESP32 dev board with EasyEDA from start to finish: https://youtu.be/S_p0YV-JlfU

I like these top-down big picture kind of things that go over everything in just enough detail to know what to search for. Does something like this but with flux.ai exist for complete beginners? The tutorials seem more focused on users switching over from other PCB design tools.

Every step of the way I am just amazed how today I can get physical things custom made and shipped within days.

And then there's fun videos like this here: Small scale fully automated PCB production in what is basically a garage -> https://www.youtube.com/watch?v=ZH6ljHUx_IM&t=800s

jacquesm

That last video... drool.

drones

As someone who is not well versed in the current LLM paradigm, how are LLMs for technical domains trained? I can imagine how a model based off internet text would make for a _general_ conversational partner, but I'm unsure how a model would produce accurate information from such a niche prompt.

Also curious: has anyone created an LLM for circuit schematics trained on HDL code?

Very cool application. It's nice to be reminded that people are actually trying to make useful products in a world where it seems everyone is trying to sell you a solution with no problem.

rock_hard

For starters you can feed it every book on electronics and every datasheet out there

That plus broad world knowledge and reasoning abilities gets you very far

And then you dynamically provide proper context and prompting, as well as deterministic tooling such as a calculator, and voila…you've got your new favorite virtual teammate :)
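The "deterministic tooling" idea can be sketched as a simple dispatch loop: when the model's output contains a tool call, evaluate it deterministically and splice the result back in. A toy illustration; the `CALC(...)` convention and the parsing logic here are hypothetical, not Flux's actual protocol:

```python
import re

def run_calculator_tool(model_output: str) -> str:
    """Replace CALC(<expr>) markers with deterministically computed results,
    so arithmetic comes from a calculator rather than the model's memory."""
    def evaluate(match: re.Match) -> str:
        expr = match.group(1)
        # Whitelist a tiny arithmetic grammar; anything else is rejected.
        if not re.fullmatch(r"[0-9eE+\-*/. ()]+", expr):
            return "[invalid expression]"
        return str(eval(expr))  # tolerable in this sketch: the whitelist excludes names
    return re.sub(r"CALC\(([^)]*)\)", evaluate, model_output)

reply = "I = V/R = CALC(10 / 500) A, so a quarter-watt resistor is fine."
print(run_calculator_tool(reply))
```

The point of the pattern is that the model only decides *what* to compute; the number itself comes from code, so it can't be hallucinated.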

fortyseven

Why on earth would you call this "copilot" when GitHub has already long established that as a name in this space? :I

paulgb

I believe aviators have been using it longer.

gojomo

Indeed, but under trademark law, the context-of-use matters, and is what allows generic terms from one domain – like 'copilot', 'apple', 'slack', 'y combinator' – to become defensible trademarks in another commercial-offerings domain.

From this landing page's headline & 1st paragraph, I was still thinking they were talking about the broader-market, better-known Github Copilot – also a for-pay LLM-driven coding assistant! – in some way. In the 2nd paragraph, I saw they were just reusing the name of someone else's LLM coding assistant for something different that they independently trained.

Note also the current submission headline here is "ChatGPT for Hardware Design", not the page's "AI For Hardware Design with Copilot".

Did the submitter not use the usually-requested original page title, because they knew `Copilot` would be misunderstood? (They created another misunderstanding via their change: ChatGPT is only an analogy for what Flux.ai's product is doing, rather than a contributing part of the system.)

gowld

If they aren't using ChatGPT in Flux.ai, that's textbook trademark misuse!

Check out my new product -- It's Windows for Raspberry Pi! (Actually a Ubuntu port)

Czarcasm

Github can't monopolize the word "copilot". It's a basic english word to describe someone that is assisting you to steer a vessel/team/project. It's been around for hundreds of years in naval industry, and in aviation since the dawn of flight.

gowld

Oh, they can if they get a trademark. It's a navigational term, but not a computing term. Inventing a new meaning for a word is standard use of trademark.

iamflimflam1

Was wondering the same thing - just seems a bit daft and is going to really confuse people.

UncleOxidant

Yeah, this seems like a lawsuit waiting to happen. Rename it to 'Solder' or something before that happens.

celeritascelery

GitHub doesn’t own the copilot trademark. I would actually like to have these assistants called “copilots” instead of “coding LLM”. Since these are going to be a part of our lives let’s use a useful and memorable name.

gojomo

Simply via commercial use, Github could choose to assert a common-law trademark on 'Copilot' within their field-of-use. But also, they've registered "Github Copilot" at the USPTO:

https://trademarks.justia.com/974/60/github-97460083.html

zlstone1992

This has absolutely been a long-awaited application of GPT.

What is the process for adding components to the list? Additionally, could rules be user-defined (to minimize EMI/EMC, e.g.)?

rock_hard

Yeah you can create your own components or work with the over 250k we have in the public library.

Flux is designed much like GitHub in this regard…you can fork other people's projects and whatnot.

You can already ask it today to perform EMI/EMC checks for you…and we have started thinking about using it to write static rules for you among other things

amelius

What I want most is simulation models of commercial components, and a good simulator...

Oh and something that can read datasheets for me, and extract important parameters (or convert them to spice models).

A tool that can hunt for cheap components would also be nice.

Apart from that, I don't think I like to have any fuzziness in my workflow.
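The datasheet-parameter wish above is partly tractable today with plain text mining, before any LLM or SPICE conversion is involved. A rough sketch, assuming the datasheet has already been converted to text; the regexes, field names, and sample text are illustrative, and real datasheets are far messier (table-aware extraction or an LLM pass is usually needed):

```python
import re

# Sketch: pull a few key parameters out of datasheet text with regexes.
# Patterns and sample text below are illustrative only.
PATTERNS = {
    "supply_max_v": r"Supply\s+Voltage.*?([\d.]+)\s*V",
    "resolution_bits": r"Resolution.*?(\d+)\s*-?\s*bits?",
}

def extract_params(text: str) -> dict:
    """Return whichever whitelisted parameters the patterns can find."""
    params = {}
    for name, pattern in PATTERNS.items():
        m = re.search(pattern, text, re.IGNORECASE | re.DOTALL)
        if m:
            params[name] = float(m.group(1))
    return params

sample = "Absolute Maximum Ratings: Supply Voltage 5.5 V. Resolution: 16 bits."
print(extract_params(sample))
```

Extracted values like these could then feed the kind of deterministic spec checks discussed elsewhere in the thread, or be emitted as a starting point for a SPICE model.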

rock_hard

Yes, us too!

We are working on all of this right now and have a bunch more announcements over the coming weeks and months.

Stay tuned!
