Hmm. A MOSFET for a motor switch might be overkill. Also, there's no flyback diode, so your Pi, or at least the MOSFET, will pretty much let its blue smoke out when you get a flyback voltage spike.
When MOSFETs get hot, that usually means the gate voltage is too low. It looks like you're driving the MOSFET gate through a resistor voltage divider. It would be better to drive the gate straight from the Pi's GPIO pin. Also, a flyback protection diode would be a very good idea.
Read up on flyback diodes. There are also numerous motor driver ICs made specifically for this purpose.
A flyback diode is critical in anything involving a magnetic field, especially the collapsing magnetic field of a motor that stops spinning. The field collapse induces a relatively huge voltage spike, many, many times greater than anything a normal component is designed for.
Not having one is like driving your car down the highway without any brakes. The only way to stop is to crash.
> So your suggestion is actually introducing a second magnetic field, so now you've doubled the chances of blowing up your Pi and or the MOSFET.
No. You haven't fixed the original problem, but your motor is now isolated from the rest of the circuit.
You now just have a relay with an inductive element instead of a motor. Ultimately, you haven't solved anything (unless you are using an AC motor, and you don't have a triac or something else solid-state to control the motor). You still need the flyback diode.
But you haven't doubled your problem by introducing a relay; you've merely moved the issue to another part.
Now, in regards to the MOSFET: many MOSFETs (not all!) have a built-in protection diode between the source and drain. Check your datasheet for details (including what kind of back-feed voltage/current it can handle - some may need an added diode with better ratings).
EDIT: Also, some relays have built-in protection diodes (or can be ordered as such) as well (again, check the datasheet). You see this more on relays for automotive applications (i.e. standard Bosch-style relays) than on ordinary bare PCB relays.
Long and short: motors store energy in a magnetic field while current is flowing. When it's not, the field collapses and turns back into electricity.
Now think of all that electricity as one big wave, because that's what it is. If you were running 12V, you can see a surge upwards of 30V. This is bad.
The key is to give that energy somewhere to go. That's done by putting a diode in the reverse direction across the motor terminals, so that big flow of electrons can equalize itself BEFORE hitting other silicon (like the MOSFET or your RasPi).
Ideally, you want to do this for motors, electromagnets, solenoids, and inductors (well, unless you're building an L-based filter, but that's beside the point). They all have this magnetic energy -> electricity -> surge thing going for them.
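To put very rough numbers on that surge (every value below is an assumption for illustration, not a measurement of any real motor): the inductor equation V = L * di/dt shows why cutting the current quickly produces such a big spike, and what a flyback diode clamps it to.

```python
# Rough, illustrative estimate of an inductive kickback spike.
# All component values here are assumptions for the sake of the example.

L = 0.01      # winding inductance in henries (assumed 10 mH)
di = 1.0      # current being interrupted, in amps
dt = 1e-6     # how fast the switch opens, in seconds (assumed 1 us)

# V = L * di/dt: the faster the current is cut off, the bigger the spike.
spike_volts = L * di / dt
print(spike_volts)   # ~10 kV (ideal, unclamped)

# With a flyback diode across the winding, the spike is instead clamped
# to roughly the supply rail plus one diode drop:
supply = 12.0
diode_drop = 0.7
clamped_volts = supply + diode_drop
print(clamped_volts)  # ~12.7 V
```

Real windings have resistance and stray capacitance that limit the ideal figure, but the point stands: the unclamped spike is orders of magnitude beyond what the MOSFET or the Pi's GPIO can survive.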
Yep, was going to say exactly this. Anything involving collapsing magnetic fields, e.g. solenoids, relays, motors, simply has to have a flyback diode.
Incidentally, you can demonstrate this by wiring up the relay so it oscillates, i.e. put the NC contacts in series with the coil. Then stick your fingers across the coil. It'll give you a nice shock; not enough to hurt you, but enough to go "hmm, I'm not going to do that again", like licking a 9V battery.
Attempts to measure these spikes on a cheap scope years ago ended up blowing the scope input FET up. Whoops. Good job it was a university owned scope :D
> Attempts to measure these spikes on a cheap scope years ago ended up blowing the scope input FET up.
Yeah - they make special high-voltage probes for this kind of thing (some can go up to 10 kV and beyond - it just depends on how much money you want to spend).
I believe that the main difference between a standard probe and an HV probe is one of resistance; I think the HV probe puts a large-value (megaohm) resistor into the mix (not sure if it's in series, or between the probe input and ground - see my further note below).
If you have a scope, it's handy to have one around "just in case" if you can afford it. I got lucky myself; I found one for a few dollars at a local Goodwill thrift store (the strange stuff you can find there...)
Note:
Hmm - I decided to look a bit more into this - I guess things on HV probes can get complicated quickly!
So - a basic probe is just a voltage divider with large-value resistors (like I alluded to earlier); but as the frequency increases, lots of other weird and fun stuff comes into play (and in the comments section of that article, someone mentions special chemicals that had to be added to certain special probes he used in the past!).
Yeah the Tektronix HV probes used to come with an aerosol can and you had to fill them up.
Technically, frequency compensation is required across all voltage dividers for scopes, so as not to accidentally create a low-pass filter with the parasitic capacitance in the cables and input circuits. It's all quite fun.
Disclaimer: was an obsessive compulsive scope collector for a while.
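For the curious, the divider arithmetic for a garden-variety 10:1 passive probe looks like this (the component values are typical-ish assumptions, not from any particular probe's datasheet), including the matching condition the compensation trimmer exists to satisfy:

```python
# Toy model of a 10:1 passive scope probe divider (values assumed).

r_tip = 9e6      # series resistor in the probe tip, ohms
r_scope = 1e6    # scope input resistance, ohms

# DC attenuation ratio of the resistive divider:
attenuation = (r_tip + r_scope) / r_scope
print(attenuation)   # 10.0

# The divider only stays flat over frequency when the RC time constants
# on both sides match: r_tip * c_tip == r_scope * c_scope.
c_scope = 20e-12                     # scope + cable capacitance, farads (assumed)
c_tip = r_scope * c_scope / r_tip    # required tip capacitance
print(round(c_tip * 1e12, 2))        # ~2.22 pF
```

That tiny tip capacitor is what the "compensate your probe on the calibration square wave" ritual is adjusting; get it wrong and you've built exactly the accidental low-pass (or peaking) filter described above.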
Something that also exists (but is mainly used on purely or nearly purely electrical systems - think old-school, pre-microcontroller/CPU ladder-logic relays and controllers) are "quencher resistors": resistors placed across the coil to absorb the voltage spike. You sometimes see them on automobiles, mainly older vehicles.
I hope this refers to ring-0-like instructions such as x86 STI, and not some kind of OEM/manufacturer "you can only use this set of instructions if you pay us" type privilege?
Then C++ has failed. If it has so many competing standards and subsets, then it is too fragmented. This is why C++ written in one standard looks like a whole other language compared to C++ written in another.
If after a decade there are so many details still obscured from you, then that's again pretty much the fault of the over-engineered, tacked-on language called C++.
Other mainstream languages work fine and allow you to write clean, modern code without literally dozens of subsets and implementations. So why does C++ feel the need to have so many?
On the contrary, the one nice thing about C++ is that it is 'multi-paradigm'. Don't like boost's philosophy of overengineering every little detail? Then don't use boost. C++ without exceptions, RTTI, or even the whole STL? Perfectly fine. Other languages don't have this freedom. It's a double-edged sword of course. Less freedom also means wasting less time with pointless discussions about which combination of C++ features is 'right'.
After using boost in several projects and regretting it each time (slow compile times, painful to configure), I finally learned my lesson and "don't use boost" is now one of my guiding principles when programming in C++.
While I would mostly agree, I would humbly suggest a small nudge in perspective: Boost does not equal Boost. While I passionately hate most of Boost, I find boost::optional to be one of the most useful template classes. So you shouldn't treat Boost as one library, but as a collection of different libraries. Same goes for STL in my opinion.
I think Boost has definitely had a positive impact on C++, and the fact that large amounts of it are now in (or had a major influence on) the std library certainly makes it much easier to forgo using it.
Sure, threads and datetime have been in certain implementations, too. I think anyone can appreciate the fact that these libraries are not compilation hogs (such as Phoenix or MPL) and are portable and save a lot of time.
We use C++ because of C++ the ecosystem, not because of C++ the language.
There really is no other language that fits all of the constraints C++ fits. If there was, I'd be using it. And before you ask 'Have you looked into...' the answer is yes, unless the language is obscure, in which case it lacks the ecological robustness of C++.
Gotta say I'm looking forward to it too, even though I might not ever use it.
There are some things I disagree with it on, but most of those are orthogonal to the domain the language is meant for, namely game programming. To that end, I can get behind those choices.
That said, the main things I don't like are the lack of exception handling and the lack of automatic memory management. I understand why these aren't going to be part of a language aimed at game programming. At the same time, I fear that without them, the language may be relegated to a niche used for game development and nothing else. While it is laudable to think these features aren't needed, there is a reason they became available - and it mainly has to do with the fact that even "good programmers" aren't "perfect programmers"; we are all humans, not machines.
Features that make the language stand out, though, are the ideas of functions that run at compile time, which allow for a build system written in Jai itself that runs as part of compilation (so you don't need to learn some other "language" just for building your world); the whole AoS/SoA mess made simple to implement with only one keyword (kinda neat!); plus the whole "uplift" of code from inner to global usage, with minimal changes, as it morphs from "lines of code" to "capture" to "anonymous function", etc. That's pretty powerful.
The inverted type declaration syntax for variables and functions will take some getting used to, but that part is very minor. There's one part of this that I question - namely, that variables are declared like:
name: type = value
...so:
foo: int = 0;
...but functions are defined as:
name := () -> return_type {}
ie:
bar := () -> float {}
I would have thought that a function would be defined as:
name: return_type = () {}
so:
baz: float = () {}
...thus more like the variable declarations (plus it would make turning a variable into a function easier, perhaps). There is probably something about compiler design, parsing, etc. that I don't know that precluded this. That would be my first guess as to why the difference exists, but I would love to hear the actual reason from the author, if he reads this.
> The inverted typed declaration syntax for variables and functions will take some getting used to, but that part is very minor.
Go, Rust, Swift, Ada, Pascal - languages both modern and old also use that inverted syntax and it's not such a burden to get used to.
> I don't like are the lack of exception handling
If done properly, it's possible to do without exceptions and have more robust and readable code - see for example Rust with its Error type and the try macro/? operator, which I've found I much prefer to exceptions.
I'm not an expert at parsing, but one main issue I can imagine with having the same syntax for declaring functions and variables is that you need to look at six tokens to know whether it's a function declaration or a variable declaration. I say six because you have the name, the colon, the type, the equals sign, the parentheses (and you certainly can't stop at the parentheses, because then you couldn't declare a variable using them, e.g. foo: int = (4 + 5) * 3;), and finally a pair of curly braces. And even then, I don't know Jai at all, so it may be legal to use braces in variable declarations too.
With the name := () -> return_type {} syntax, you know whether it's a variable declaration or function declaration after two tokens: name (for either), then colon or colon-equal to differentiate the two.
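As a toy illustration of that lookahead argument (this is plain Python pretending to be a parser, not anything to do with a real Jai implementation), the `:` vs `:=` split lets a classifier commit after just two tokens:

```python
# Toy two-token classifier for Jai-style declarations.  This only
# illustrates the lookahead argument; it is not real Jai semantics.

def classify(tokens):
    """tokens is a list of strings; we only ever peek at the second one."""
    sep = tokens[1]
    if sep == ':':
        return 'explicitly typed declaration'   # e.g. foo: int = 0;
    if sep == ':=':
        return 'inferred declaration'           # e.g. bar := () -> float {}
    return 'not a declaration'

print(classify(['foo', ':', 'int', '=', '0', ';']))
print(classify(['bar', ':=', '(', ')', '->', 'float', '{', '}']))

# With the hypothetical unified syntax (name: type = value for both),
# the classifier would instead have to scan past the '=' looking for
# '(' ... ')' '{' before it could commit to either interpretation.
```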
Then by that standard, C (hello, UB), Python, Ruby, JavaScript, and Java, among many other programming languages, have failed as well.
Using my favorite scripting language (Python) as an example, I always need to check which version is installed on a given OS and then validate at https://docs.python.org/ if all the features I want are actually available on that version.
Not to mention the CPython, PyPy, Cython, and Jython differences on top of that.
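The check I end up writing is usually nothing fancier than a guard on `sys.version_info` (the 3.6 threshold below is just an example cutoff, not anything special):

```python
import sys

# Fail fast with a readable message instead of a confusing SyntaxError
# somewhere deep in the script.  This deliberately avoids f-strings so
# the guard itself still parses on older interpreters.
if sys.version_info < (3, 6):
    sys.exit("This script needs Python 3.6+, found %s" % sys.version.split()[0])

print("Python %d.%d is new enough" % sys.version_info[:2])
```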
Which other systems language allows the same style of programming while giving the same performance? You pretty much know if and when you need C++ - which is rare unless you live near the hardware.
That said, I'm not sure how you wish to engage people here with that comment. Do you have any specific criticisms of C++ (there are many!) that we can discuss beyond your personal feelings?
I would disagree that C++ is only useful "near the hardware". I have spent the bulk of my career not near the hardware and have found C++ to be the most useful of the languages I have learned. My current job is as close to the hardware as I have gotten - I am even on the "Firmware Team" - and I still feel pretty removed from it.
If you need interop between two languages you almost always have to drop to C; if you want to do that while keeping high-level concepts like objects, then use C++.
If you need more performance and you have already maxed out perf in some "higher" language, then a naive C++ rewrite is often just faster. And when you start breaking out C++ profiling tools and carefully managing memory, you get insane speeds that didn't seem possible before.
With some of the new stuff in C++11/14/17 it is actually fun to work in C++ - not as fun as Ruby... up until 3D things start appearing on the screen.
>I would disagree that C++ is only useful "near the hardware"
Okay, but I didn't say that? I said C++ has limited uses _unless_ you're near the hardware, where it's much more common. You can disagree, but it's better if you disagree with what I actually said :P
But to your reply: personally, I don't agree that C++ is fun to work with. Like any other language, I tolerate it as and when I need to. And like I said in the previous reply, sometimes C++ is the only choice, and you accept that and do the best you can. I've found that good tooling does reduce some of the headaches, especially when it comes to debugging (although stepping through optimized production code still remains a giant pain in the ass).

I'm happy that the language is progressing, and with all of the new features it's like a cafeteria where you choose what you like and what works for you and your team. I certainly don't agree with all the advice the C++ cheerleaders put out on the internet about inserting every new feature into production code. Personally, I take a very conservative approach and only use features that have shown to be useful across the entire dev cycle: helping to conceptualize an idea, being easy to reason about by everyone on the team, being easy to debug, and being reliable enough for 24/7 execution (though this applies more to the library side of things). My code runs automation machinery, and it has to have zero bugs in it. I choose C++ because currently it's the best tool for the job.
I figured that by providing examples of where C++ was required, the cases near those - where it is used but not strictly required - would be evident.
In pretty much every application domain there are a large number of C++ applications. It is used in finance, business, games, and medicine; as you point out, it is used very close to the hardware, particularly when that hardware is custom; and imagine the web with many fewer browsers and servers. The only kind of developer that might outnumber C++ devs is Java devs, because there are so many custom internal enterprise applications that place higher emphasis on being barely functional than on being performant (and they are largely correct in their decision).
I find no need for your needlessly specific separation of "limited" vs. whatever my wording was. I hope that whatever you take from this, should you choose to continue nitpicking, is that C++ is hugely useful in many places; it just isn't new and glamorous, so it doesn't get much press.
Fun to work with? Perhaps not. But 99% of the software you're using to post your reply (and generally do just about anything on your computer) is written in some derivative of C or C++. So C++ has that going for it, which is nice.
I remember a long time ago people pointing out that it was actually posted to Digg first. Imgur's creator says it was posted to multiple services on the same day [1], but that he deleted the original Reddit post. The post saying it was a "gift to Reddit" came a day after the post to Digg failed to gain traction.
It was always kind of a weird thing. For reasons mysterious at the time, reddit suspended lots of the self-promotion rules around imgur, which in turn allowed it to grow quite rapidly. I had a friend who tried to start another symbiotic-with-reddit startup at a similar time, and reddit admins decided to enforce the no-self-promotion rules on him even while letting imgur run free. It was very frustrating for him; fortunately he hadn't gone in for lots of money on it and was able to shut it down pretty quickly when it was clear reddit wasn't playing fair.
A lingering question at that time was how imgur was paying their obviously enormous bandwidth bills, but /u/mrgrimm was always very vague about the source of his funding.
Long story short, it turns out that reddit had at some point made an investment in imgur, though that earlier funding was never made public. So reddit was banking on imgur being successful but never made the relationship explicitly public.
There were other weird relationship oddities too, e.g. imgur used to not allow pornography to be uploaded and linked to, except when the referrer came from reddit.com. (if you look up info on imgur and referrer tag weirdness, it's a common pattern)
After imgur started jumping the shark and reddit's entire executive team turned over, they decided to build their own image hosting service. But now they had a well-funded plan B to fall back on if their own hosting service didn't work out.
Imgur has always been very vague and shady about funding, revenue, and growth plans. It's assumed that they just wanted to be a non-suck image host.
> In the embedded world they're available and not even super new,
I feel you might be understating this! FPGAs and ASICs have in fact been in use for decades now; they are very much ingrained into many high-end products. Any digital oscilloscope has an FPGA, for example, and has had for a very long time. These are often used in combination with a CPU for the UI part. The CPU might be an IC on the PCB, or - what is quite popular - a synthesized soft core programmed into the FPGA itself.
ARM, for example, has a whole architecture specifically designed for optimal FPGA synthesis.
Honestly, using ES6+ syntax with Babel and ESLint is a good experience. A comment above recommends Flow for type checking. I haven't tried it so I can't comment.
Static type checking is great in theory but it's just not worth it if you can't get it to play nice with npm modules. I haven't been able to do so reliably in my year of using TS, among other issues. However, don't necessarily take it from me: if you can get it to work, then great, maybe you'll like it more than I did.
If your project has a simple build process, you could still try TS with vscode; it works far better than with VS2015/2017 (that's not to say you won't encounter issues, but it won't be as hellish to diagnose).
Although, IMHO, I would argue that Types just don't fit JavaScript; TypeScript is very OO while JavaScript is not. My TS code is all classes and interfaces. It's quite Javaesque and a departure from the non-TS part of the codebase. It feels a bit too enterprisey and unproductive for the JavaScript ecosystem. This last part is just opinion, though.
This, in addition to the relentless bureaucratic weirdness I encountered, has driven me away. For instance, it doesn't polyfill Promises, Maps, or Sets, even though its front page says it's an ES2015 transpiler. You have to polyfill them yourself, and then you have to get TypeScript not to reject them by providing typings (https://stackoverflow.com/questions/30019542/es6-map-in-type...). You also have to set a config flag in tsconfig.json that tells TypeScript that you're providing those polyfills, i.e. {"lib": [ "es5", "dom", "es2015.promise" ]}. Good luck finding all of this information in one place.
I think TypeScript is secretly the enemy of JavaScript.
As a React/Redux developer I tend to write very functional JavaScript. We had a CTO try to ram TypeScript into the project and it went over like a fart in church.