(www.espressif.com)
Edit: found an article explaining some of their naming logic, and said that the SoC naming will get its follow-up article, but sadly it never happened. https://developer.espressif.com/blog/2025/03/espressif-part-...
(Disclaimer: I work at Intel but this was way before my tenure.)
https://www.bunniestudios.com/blog/2026/baochip-1x-a-mostly-...
Edit - Oops GeorgeHahn beat me to it
I totally wish that a board would come with PoE…
Because as it is right now, powering a fleet of those with USB power supplies is annoying as fsck…
There are two ESP32 boards that have been around for a while with PoE:
- https://www.tme.com/us/en-us/details/esp32-poe/development-k...
- https://wesp32.com/
I'm more hopeful for single-pair Ethernet to gain momentum, though! Deterministic, faster than CAN bus, a single pair, with power delivery:
https://www.hackster.io/rahulkhanna/sustainable-real-time-la...
I keep looking for a reasonably priced 10BASE-T to 10BASE-T1L bridge... everything commercial seems too expensive (for me), and the two hobby designs [1] [2] I've seen are not orderable :(
But I'm seeing more commercial options lately, so that's hopeful.
I’d buy in a heartbeat
On that note, why does PoE capability often add such a big proportion to the price of various items? Is the technology really costly for some reason, or is it just that there's fairly low demand and people are still willing to pay?
The trick, as others have said, is what adding it to your design does in terms of complicating compliance.
[0] https://www.digikey.com/en/products/detail/silvertel/AG9705-...
They have to use a transformer and a more complex control strategy, not a simple buck regulator with an inductor. PoE inputs need to tolerate voltages several times higher than the highest USB-C voltages, so more expensive parts are used everywhere.
Oh, and a cheap bridge rectifier and some signaling resistors to take care of input polarity and signal to the source that we in fact want the approximately 50V that could hurt a device not made for it.
How much of the complexity is a “fundamental electrical engineering problem” and how much of it is just a spec written to solve a different set of problems?
Therefore, wifi is more convenient than ethernet.
You don't need long cables, just a local power source.
Which means batteries that have to be replaced and maintained, or cables... So Ethernet with PoE, or even better SPE (single-pair Ethernet) with PoDL (Power over Data Lines, the PoE equivalent for SPE), is the best option from my point of view.
Both solutions require one cable per device, but the first would need only short, thin cables, while the second would need very long cables, which I don't even know how to do properly without milling channels into my walls.
PoE involves much less of all that. Still, it's difficult to recommend these days, with Wi-Fi being fast, reliable, and so widely used. Certainly not for the average residential user.
Another point is that mains power in my area can go down periodically. My PoE switch is powered by a Li-Ion UPS and can provide power for about a day.
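A back-of-envelope sketch of that "about a day" runtime; all the figures here (battery capacity, switch load, converter efficiency) are my own illustrative assumptions, not the commenter's numbers:

```python
# Rough UPS runtime estimate. Every figure below is an illustrative
# assumption, not a measurement from the comment above.
battery_wh = 240          # e.g. a 20 Ah, 12 V Li-ion pack
load_w = 8                # small PoE switch plus a couple of powered devices
efficiency = 0.85         # DC-DC / inverter conversion losses

runtime_h = battery_wh * efficiency / load_w
print(f"{runtime_h:.1f} h")  # 25.5 h, i.e. "about a day"
```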
Use cases like IoT? The very thing this is for?
I'm in the unique position of having a data set covering over 8,000 APs and 40k unique devices. If you design properly, there is no need for 2.4 GHz, ever. 2.4 GHz congestion (with nearly no actual 802.11 traffic) is very high, to the point where the IoT folks are struggling.
Yup. And it's exactly why some of my IoT admins are struggling. There is only so much spectrum to go around.
2.4 GHz makes sense because a tiny device like this does not need a high-speed Wi-Fi connection, and deployment scenarios benefit more from 2.4 GHz's better penetration.
My application needed both CAN bus and Bluetooth (though no Wi-Fi), so the S3 was one of the only options available. I assume the high current draw is because Wi-Fi and BLE share the same radio?
I suspect a lot of the things people are using RPi for are better served by things like this (and virtualisation for the heavier end)
This is perhaps lost in the noise, but IMO it's a big deal: PSRAM is starting to get serious bandwidth.
I wonder if it will be possible to (ab)use the faster PSRAM interface on the ESP32-S31 as a general purpose 8-bit parallel interface, eg. for ADCs...
I wonder if at some point I can create low-power devices with ESPHome for Home Assistant. I assume this should use less power than staying connected to Wi-Fi?
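ESPHome does support this pattern through its `deep_sleep` component: the device wakes, connects, reports, and powers the radio down again. A minimal config sketch; the durations are placeholders, not recommendations:

```yaml
# ESPHome deep-sleep sketch: wake, publish sensor data, sleep again.
# Tune both durations for your battery budget and update frequency.
deep_sleep:
  run_duration: 30s      # stay awake long enough to connect and publish
  sleep_duration: 10min  # radio fully off between updates
```

The trade-off is latency: while asleep the device is unreachable, so OTA updates and on-demand commands need extra handling.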
[0] https://www.cnx-software.com/2026/03/24/esp32-s31-dual-core-...
ARM is a much more mature platform, and the licensing scheme helps somewhat to keep really good physical implementations of the cores, since some advances get “distributed” through ARM itself.
Compute capabilities and power efficiency are very tied to physical implementations, which for the most part happen behind closed doors.
I wish I could run DiscoBSD/RetroBSD [2] on an ESP32. I like the idea of running on an MCU something that was originally meant for a PDP-11 (2.11BSD).
Although, I'd like to see some non-paid-blogger head-to-head reviews benchmarking instruction-cycle efficiency per watt across comparable Arm vs. ESP32 Xtensa LX6* and RISC-V parts.
* Metric crap tons of WROOM parts are still available and ancient ESP8266 probably too.
ESP-IDF, the official C SDK, is a bit more work, and there is drama around PlatformIO, but it's significantly more stable.
What do you mean ?
```
# platformio.ini
[env:esp32dev]
platform = https://github.com/pioarduino/platform-espressif32.git#55.03.37
board = esp32dev  ; adjust to your actual board
framework = arduino
```
[0]: https://github.com/pioarduino/platform-espressif32

Other than that it works pretty well. This is if you run ESP-IDF; with bare-metal Rust it's either the best thing ever or meh. The Rust community seems to use STM32s and Picos more.
It shocks me even more that any Western customer would do the same with network-connected Chinese chips. But we do.
The Espressif chips are truly incredible value, but what are we doing here?
Is there any doubt that these represent a major attack surface if a conflict were to heat up?
If you had network-connected chips of your own design inside every household of your adversary, what could you do with that?
- Early (ESP8266) MCUs had weak security, implementation flaws, and a host of issues that meant an attacker could hijack and maintain control of devices via OTA updates.
- Their chosen way to implement these systems makes them more vulnerable. They explicitly reduce hardware footprint by moving functionality from hardware to software.
- More recently there was some controversy about hidden commands in the Bluetooth stack, which were claimed to be debug functionality. Even if you take them at their word, that speaks volumes about their practices and procedures.
That’s the main problem with these kinds of backdoors, you can never really prove they exist because there’s reasonable alternative explanations since bugs do happen.
What I can tell you is that every single company I've worked at which took security seriously (medical implants, safety-critical industries) not only banned their use in our designs, they banned the presence of ESP32-based devices on our networks.