I can't even imagine how many joules would be used per function call!
As an experiment, it's kind of cool. I'm kind of at a loss to what useful software you'd build with it though. Surely once you've run the AI function once it would be much simpler to cache the resulting code than repeatedly re-generate it?
They're handy for situations where it would be impractical to anticipate the ways your input might vary. Say you want to accept invoices or receipts in a variety of file formats where the data structure varies, but you can rely on the LLM to parse and organize them. AI Functions let you describe how that logic should be generated on demand for the input received, with post-conditions (another Python function the dev writes) which define what a successful outcome looks like. Morgan wrote about the receipt parser scenario here: https://dev.to/morganwilliscloud/the-python-function-that-im...
(FYI I'm on the Strands Agents team)
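As a rough illustration (this is my own sketch, not the actual Strands Agents API; all names here are made up), the shape of an AI function with a dev-written post-condition might look like:

```python
import json

def check_receipt(result: dict) -> bool:
    """Post-condition written by the developer: a successful parse
    has a vendor string and a non-negative numeric total."""
    return (
        isinstance(result.get("vendor"), str)
        and isinstance(result.get("total"), (int, float))
        and result["total"] >= 0
    )

def generate_parser(raw: bytes):
    """Stand-in for LLM codegen: in the real system the model would
    emit parsing logic tailored to this input's format. Here a JSON
    stub plays that role."""
    def parser(data: bytes) -> dict:
        return json.loads(data)
    return parser

def parse_receipt(raw: bytes) -> dict:
    result = generate_parser(raw)(raw)
    if not check_receipt(result):
        raise ValueError("post-condition failed: regenerate or reject")
    return result
```

The point of the pattern is that the generated code can vary per input, while the hand-written post-condition is the stable contract that decides whether a given generation counts as a success.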
I've used stuff like this for a hobby project where "effort to write it" vs "times I'm going to use it" is heavily skewed [0]. For production use cases, I can only see it being worth it for things that require using an ML model anyway, like "summarize this document".
[0] e.g. something like the below which I expect to use maybe a dozen times total.
Main routine: In folder X are a bunch of ROM files (iso, bin, etc) and a JSON file with game metadata for each. Look for missing entries, and call [subroutine] once per file (can be called in parallel). When done, summarise the results (successes/failures) based on the now updated metadata.
Subroutine: (...) update XYZ, use metacritic to find metadata, fall back to Google.
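The main routine above could be sketched in plain Python roughly like this (all names are hypothetical; `fetch_metadata` stands in for the AI-driven subroutine that hits Metacritic and falls back to Google):

```python
import json
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

ROM_EXTS = {".iso", ".bin"}

def fetch_metadata(path: Path) -> dict:
    """Stand-in for the AI subroutine: look the title up on
    Metacritic, fall back to a Google search. Stubbed here."""
    return {"title": path.stem}

def find_missing(folder: Path, metadata: dict) -> list[Path]:
    """ROM files in the folder that have no metadata entry yet."""
    return [p for p in folder.iterdir()
            if p.suffix.lower() in ROM_EXTS and p.name not in metadata]

def process_all(folder: Path) -> dict:
    meta_path = folder / "metadata.json"
    metadata = json.loads(meta_path.read_text()) if meta_path.exists() else {}
    missing = find_missing(folder, metadata)
    # One subroutine call per file, run in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(fetch_metadata, missing))
    for path, entry in zip(missing, results):
        metadata[path.name] = entry
    meta_path.write_text(json.dumps(metadata, indent=2))
    # Summarise successes/failures from the now-updated metadata.
    ok = sum(1 for e in metadata.values() if not e.get("error"))
    return {"successes": ok, "failures": len(metadata) - ok}
```

Which is exactly the kind of glue code where "effort to write it" outweighs "times I'll run it": a dozen runs barely justifies the hour of plumbing.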
The initial version on GitHub does not implement caching or memoization, but it's possible and is where the project will likely head. (FYI I'm on the Strands Agents team.)
As long as there's a country willing to build and sell ESP32s, I think it would be fairly easy to get hold of them. How does a customs agent distinguish between an ESP32 and another microcontroller? These things are in every gadget. Is a government really going to ban all electronics?
Just look at how ineffective governments are at stopping drugs. If people are motivated to smuggle things, they will. Is there going to be a booming black market in ESP32s? Probably not. But will motivated people manage to import them? Almost certainly.
The power imbalance is not in favor of the individual citizen. It's fairly simple to enact a law saying "unlicensed importation of electronic devices is an offence", only license major retailers, and have Customs seize anything that doesn't come with the right paperwork attached (which they already do). Drugs are far easier to make than silicon chips, however clever people like Sam Zeloof may be.
To have a firearms permit here, I need a "Good Reason" - that's the language from the law verbatim. "I like guns" is not a Good Reason. In that vein, what would be your Good Reason for receiving an import license to bring in technology which is apparently widely used by radicals to defy duly-ratified legislation about communications visibility and enable the creation of side channels which break the law and can be used to proliferate CSAM, drugs, and terrorism? I'm sure any sane person would agree that those are bad things which need to be stopped. Perhaps you should take up a different hobby, like jogging.
> despite how clever people like Sam Zeloof may be.
You don't need to fabricate silicon chips to build a radio. You need conductors, resistors, and electricity. Almost every person currently alive has several objects transmitting radio signals within arm's reach.
> The power imbalance is not in favor of the individual citizen.
Yes it is. Because the cost is so fucking trivial: it costs orders of magnitude more to send someone out to find a transmitter than it takes to make a dozen transmitters.
1. Nobody cares enough to do all this except some nerds on HN.
2. Spurious radio transmissions from your spark gap set will be tracked down in an afternoon by government foxhunters, and then you'll be in jail for breaking the law.
I don't understand why people think they can meaningfully kinetically resist. The discussion now needs to be convincing the random voter why this is a problem for them, or the game is lost.
There's nothing preventing both from happening. By framing it as an "or" situation rather than an "and" situation you are acting as the type of person you're criticizing.
First off, guns aren't a subcomponent of the vast majority of modern items. The ESP32 was an example, but the reality is anything with a radio, be it WiFi, Bluetooth, or anything else.
Second off, guns are incredibly easy to make. Easy enough that they make them in prisons, and in Japan. But you know what's a million times easier than that? Radio. It's a common first electronics project. You can literally make one out of a few resistors, capacitors, and some wire.
The cost of fighting this type of technology is literally taking down all wireless infrastructure. ALL of it. And even then it's still a god-awfully expensive thing to fight, because anyone with a hot pointy object, an electricity source, and some things that are slightly bad at conducting electricity can make a radio.
> All electronics that can be freely programmed by the owner, not impossible.
I'm not sure that is possible. Most chips are reprogrammable. You think your cheap electronics manufacturers are going to put in high-security defenses?
Even Google and Apple can't keep their devices from getting jailbroken. You think that's going to be true of a $5 toy with a WiFi or Bluetooth chip in it?
There are some use cases where exact time is very important. Warming milk for a baby, for instance: it's pretty low volume and the difference between 30s and 40s is huge. I used to favour the two-knob microwave, but since having to do that a lot I'd always choose a digital timer. Some have decent interfaces.
The CDC recommends against heating milk in a microwave[0] whether it's human milk or formula meant for a baby due to the creation of "hot spots" and also the potential destruction of nutrients.
Even if this first generation is not useful, the learning and architecture decisions in this generation will be. You really can't think of any value to having a chip which can run LLMs at high speed and locally for 1/10 of the energy budget and (presumably) significantly lower cost than a GPU?
If you look at any development in computing, ASICs are the next step. It seems almost inevitable. Yes, it will always trail behind state of the art. But value will come quickly in a few generations.
It's really not "good" for many people. It's the sort of high-persuasion marketing speak that used to be limited to the blogs of glossy but shallow startups. Now it's been sucked up by LLMs and it's everywhere.
If you want good writing, go and read a New Yorker.
I think we are. There's definitely been an uptick in "show HN" type posts with quite impressively complex apps that one person developed in a few weeks.
From my own experience, the problem is that AI slows down a lot as the scale grows. It's very quick to add extra views to a frontend, but struggles a lot more in making wide reaching refactors. So it's very easy to start a project, but after a while your progress slows significantly.
But given I've developed 2 pretty functional full stack applications in the last 3 months, which I definitely wouldn't have done without AI assistance, I think it's a fair assumption that lots of other people are doing the same. So there is almost certainly a lot more software being produced than there was before.
I think the proportion of new software that is novel has absolutely plummeted after the advent of AI. In my experience, generative AI will easily reproduce code for which there are a multitude of examples on GitHub, like TODO CRUD React Apps. And many business problems can be solved with TODO CRUD React Apps (just look at Excel’s success), but not every business problem can be solved by TODO CRUD React Apps.
As an analogy: imagine if someone was bragging about using Gen AI to pump out romantasy smut novels that were spicy enough to get off to. Would you think they’re capable of producing the next Grapes of Wrath?
> I can only assume what you're really trying to say is "AI bad".
Lmao rarely have I seen a strawman so clearly announced.
No, if you want me to be more precise, it's that generative AI is limited by (1) its training corpus and (2) the amount of effort put into its prompts. I don't think either of those statements is controversial to even the biggest AI bull, but let me know otherwise.
Following from (1) and (2), it takes little effort for even the most milquetoast vibe coder to produce something that has millions of training examples and requires very little specification in terms of prompting: TODO CRUD React Apps.
That’s not to say that it’s impossible to create compelling content or code with AI, it’s just why go through the effort if all you need is a TODO CRUD React App to e.g. make a grocery list or create a “Show HN”.
This is perhaps true from the "language model" point of view, but surely from the "knowledge" point of view an LLM is prioritising a few "correct" data sources?
I wonder about this a lot when I ask LLMs niche technical questions. Often there is only one canonical source of truth. Surely it's somehow internally prioritising the official documentation? Or is it querying the documentation in the background and inserting it into the context window?
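The second possibility above (querying the documentation and inserting it into the context window) is essentially retrieval-augmented generation. A minimal sketch, with a deliberately naive word-overlap retriever standing in for real embedding search, and no particular vendor's API assumed:

```python
def answer(question: str, docs: list, llm, top_k: int = 3) -> str:
    """Rank documentation chunks by word overlap with the question,
    then put the best matches into the prompt. Real systems use
    embedding similarity instead of word overlap."""
    q_words = set(question.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    context = "\n\n".join(ranked[:top_k])
    prompt = f"Answer using only this documentation:\n{context}\n\nQ: {question}"
    return llm(prompt)  # `llm` is whatever model call you have
```

Whether a given chat product does this, or instead relies on the canonical docs being heavily weighted in training, is usually not something you can tell from the outside.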
This is a good article. Not because it's got some crazy insights or radical suggestions - but because it's pragmatic and sensible advice for any project. It definitely resonates with my experience - the biggest risk is just losing focus or losing track of what you're meant to do.
It's refreshingly free of buzzwords and rigid "process" too!
Yeah the hardest thing is to focus intensely and have a strong vision for what exactly the output should be directionally. The second hardest is actually getting the project finished - that requires sustained intense focus.
There's nothing more to it than that. Frameworks etc., blah blah blah. Who cares. Get the work done.
You'd need to write an entire hardware abstraction layer to do anything useful. There are projects that do this for microcontrollers, e.g. MicroPython and Espruino.
Yes, it would need support from lower level code. But then, so does C -- many things that an OS needs to do, such as installing interrupt handlers, changing the current page table pointer, jumping into a target process already in progress, etc., are not part of the C standard.