So... Any reason I'm not aware of that "tech seniors" would have a higher disposition to using the day care? Btw, I'm not being sarcastic. I know nothing of Google's benefits operations or demographic composition, let alone varying dispositions to use said benefits.
People usually become Senior Engineers in their late 20s, if not mid 20s. It's the fakest job title in the industry. The majority of Senior Engineers are probably age 28-40 - common ages to have young children.
"Senior" really just means "this person is not straight out of college." It doesn't actually mean senior in the same way that Intel and AMD's "2nm process" technology does not actually correspond to any physical dimension on the die. It just means you have some type of experience.
"Protein-deficient vegans" is a joke among omnivores, who assume that anyone not eating meat must be malnourished, and among vegans, who find ignorance of that magnitude hilarious.
I really disliked the book. Probably one of the worst books I’ve ever read.
Sometimes HN features well-written, well-researched blog pieces about events or figures in scientific history. This book felt like the author read a bunch of those and stitched them together into a book. And the reality-fantasy aspect of it makes no sense to me.
The reality-fantasy aspect turned me off too. Majorly. On the other hand, the writing is pretty terrific. The similes are especially evocative.
> like deducing all the rules of Wimbledon from the few balls that flew out of the stadium, without ever having witnessed what takes place on the court
As much as it pains me to say it, I don't think Julia will.
It looks to me like the practical problems with Julia, while addressable, are being addressed too slowly.
There are simply too many rough edges and usability problems as it is now, and at the current pace it will take maybe 10 or 15 years to address them.
On the other hand, the major use case for Julia is to have a fast, dynamic language. And it seems to me the time horizon for Python to become fast, or Rust or C++ to become dynamic is indefinite, so Julia is still the best bet in that space.
Although I mostly agree with this sentiment and have deep concerns about Julia's flaws, I will say that Julia has a very high ceiling. If the right concerns were addressed in a timely manner, I wouldn't be surprised to see its adoption rate accelerate.
Ahhh, interesting. That looks to me like it's more appropriate for "multiple threads working on independent problems" than "multiple threads working on the same problem" due to the potential serialization overhead and limitations.
Each thread has its own lua_State. However, we provide a serialization scheme which allows automatic sharing for several Torch objects (storages, tensors and tds types). Sharing of vanilla Lua objects is not possible, but instances of classes that support serialization (e.g. classic objects using require 'classic.torch', or those created with torch.class) can be shared. Remember, though, that only the memory in tensor storages and tds objects is shared by the instances; other fields will be copies. Also, if synchronization is required, that must be implemented by the user (i.e. with a mutex).
That's not general-purpose multi-processor threading, that's solving it for a very specific sub-problem.
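The "synchronization is up to the user" caveat is the same one you hit with shared mutable state in any language. A minimal Python sketch (an analogy only, not the Torch API) of guarding a shared buffer with a mutex:

```python
import threading

# Hypothetical shared buffer standing in for a shared tensor storage.
shared = [0] * 4
lock = threading.Lock()  # the "mutex" the docs leave to the user

def worker(idx, value):
    # Each thread mutates the shared storage; the lock prevents
    # interleaved read-modify-write races.
    with lock:
        shared[idx] = shared[idx] + value

threads = [threading.Thread(target=worker, args=(i % 4, 1)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(shared))  # 100: every increment survived
```

Without the lock, the read-modify-write of each slot can interleave across threads, which is exactly the class of bug the Torch docs are warning about.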
I don't think so, not for the core language at least. In my tests I saw maybe a 20-30% improvement at best compared to versions from 5 years ago. That's still great work by the core maintainers, but I don't think Julia has to fear Python getting fast. The issue is more that Python has gotten to the point where the ML ecosystem has accepted using a slow language as long as everything around it is fast.
Though as I said before, sometimes no amount of C/C++ escape hatches can improve performance, since you have to use Python objects at some point or another and that will be the bottleneck. But by then, you won't be needing some of the stuff Julia offers, like the REPL, notebooks, etc.
I haven't used Julia a lot, but to me it's in a weird spot: in theory it would be ideal to start projects with, since you'd never need to outgrow the language you started in. It's fast enough and has a pretty good, maintainable, sane design. But you're sacrificing so much, and need so much more time to get started, that you might never reach that point anyway.
Just as an example, debugging obscure problems or deploying pytorch models that are more custom in prod is already pretty daunting at times, and it's the "best" and most popular ML framework in the world. I can't imagine how much more time consuming it would be when using a much smaller/less used library/framework.
So yeah, all of that to say that being way faster isn't how Julia will win. Maybe a push from an influential player or big tech might give it the momentum it needs.
Cython is pretty nice, especially for wrappers around C. Julia’s ccall is honestly even better.
Numba is also really nice, and for just compiling against arrays I like it better than Cython. Neither helps if you need data structures or abstractions, though.
It has already taken off and found its not-so-small niche. I don't think it will ever be one of the "big 10" languages (by users), but it already has a _big_ user base.
Any page like that which is industry relevant is going to have to be associated with a company, because you need the marketing team to go through the effort of getting the case studies approved by each and every company. It's not an easy process. For example, the Instron case study (https://juliahub.com/case-studies/auto-crash-simulation/) has details of upcoming projects like the Catapult Light, which are major cost improvements passed on to customers. You then have to go through the whole process of "well, should we share this and let our competitors know what we used to get this advantage?", and contracts have to be signed before such a page can ever be built. The JuliaHub website has a whole list of case studies which have undergone this process, and I couldn't see how you would get such detailed industrial accounts otherwise.
For open source accounts, there's the SciML showcase page https://sciml.ai/showcase/. That's very focused on just one domain, though, and I tend to update it only with what I remember to put in there, so it probably has about 1/4 of the blogs and news articles it should. The "External Applications Libraries and Large Projects using SciML" part is woefully incomplete, but at least it gives a picture of what's going on. It's hard to keep those kinds of pages up to date, because exponential growth means the page requires exponential work.
Julia was ahead of the game with automatic differentiation, which took Python a few years to support. And it's still ahead of the game with integrating machine learning into scientific modeling. Macros and multiple dispatch are game changers, and they let people hack on and iterate with Julia far more easily than with Python.
And don't get me started on how nice JuMP.jl is for mathematical optimization.
Theano existed, but it didn't use autodiff at the time, most loss functions had preprogrammed derivative functions. Python first had native autodiff in June of 2013 with the ad package, but previously had bindings to Fortran and C++ libraries that supported autodiff in different use cases. Julia first had native autodiff in April of 2013 with ForwardDiff.jl.
It seems overly nitpicky to differentiate (no pun intended) so much between forward mode AD with DiffRules and theano's specific flavor of symbolic differentiation, but I'm no expert there.
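For reference, the forward-mode flavor of AD being discussed is small enough to sketch with dual numbers. This is an illustrative toy in Python, not the implementation of ForwardDiff.jl, the ad package, or Theano; all names here are made up:

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    # Carry value and derivative together: f(a + b*eps) = f(a) + f'(a)*b*eps
    val: float
    der: float

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # product rule
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def dsin(x: Dual) -> Dual:
    # chain rule for sin
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def derivative(f, x):
    # seed the derivative slot with 1.0 and read it back out
    return f(Dual(x, 1.0)).der

# d/dx [x^2 + sin(x)] = 2x + cos(x)
g = lambda x: x * x + dsin(x)
print(derivative(g, 1.0))  # 2*1 + cos(1) ≈ 2.5403
```

The contrast with Theano's approach is that nothing symbolic is built here: derivatives propagate numerically alongside the values as the function executes.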
JuMP is cool, but honestly... I find the native Python APIs from CPLEX and Gurobi to be the best. The additional abstraction of JuMP, or the equivalent Python frameworks, is always a pain for me, since I don't need to switch solvers often.
Scientific models and simulations can be, and are, made very successfully in Julia. In particular, the diffeq landscape is probably the language's largest comparative advantage.
That’s a broad area. I work in astrodynamic simulations, and don’t know anyone doing much work in Julia. Maybe a couple of grad students playing with it, but that’s it. 99% of the work is Python, Fortran, and C/C++.
Are there subdomains that use it a lot? I am not sure what the diffeq landscape is exactly, although it sounds related to dynamical simulations?
I'm sure there are other subdomains that make use of Julia, but in particular, if your problem involves writing an evolution-like or agent-based-model-like simulation, you may find the strengths of Julia particularly compelling.
Astropy [0] lives at the heart of most work. It has a Python interface, often backed by Fortran and C++ extension modules. If you use Astropy, you're indirectly using libraries like ERFA [6] and cfitsio [7] which are in C/Fortran.
I personally end up doing a lot of work that uses the HEALPix sky tessellation, so I use healpy [2] as well.
Openorb is perhaps a good example of a pure-Fortran package that I use quite frequently for orbit propagation [3].
In C, there's Rebound [4] (for N-body simulations) and ASSIST [5] (which extends Rebound to use JPL's pre-calculated positions of major perturbers, and expands the force model to account for general relativity).
There are many more, these are just ones that come to mind from frequent usage in the last few months.
Will look into those. I recently wrote a little n-body simulator to become familiar with Julia's DifferentialEquations.jl and that motivated me to learn more about astrodynamics.
One of the things I don't like about the Julia ecosystem is the monolithic libraries that have tons of dependencies. DiffEq is one of those. I think it's fine for writing a script, but if you want to develop something more sophisticated, you want to keep your dependencies lean.
You can always (slightly) reduce the DiffEq dependencies by adding OrdinaryDiffEq.jl instead of the meta DifferentialEquations.jl package. But lots of those dependencies arise from supporting modular functionality (changing BLAS, linear solvers, Jacobian calculation methods, in vs. out of place workflows, etc.). That said, the newer extension functionality may let more and more of the dependencies get factored out into optional extensions as time goes on.
And you can always use the "Simple" versions: SimpleDiffEq.jl, SimpleNonlinearSolve.jl. Those libraries were made so that users could use exactly the same syntax but with dumbed-down solvers that have essentially zero latency. The complete solvers are doing lots of fancy things for fancy cases, but SimpleTsit5 is recommended in the docs for small cases where you don't need all of the extra bells and whistles.
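A dumbed-down fixed-step solver really is conceptually tiny. A toy Python sketch of the classic RK4 stepper (illustrative only, not the SimpleDiffEq.jl code, and Tsit5 uses different tableau coefficients):

```python
def rk4(f, u0, t0, t1, n):
    """Classic fixed-step 4th-order Runge-Kutta: the 'no bells and
    whistles' kind of stepper the Simple* packages provide."""
    h = (t1 - t0) / n
    t, u = t0, u0
    for _ in range(n):
        k1 = f(t, u)
        k2 = f(t + h / 2, u + h / 2 * k1)
        k3 = f(t + h / 2, u + h / 2 * k2)
        k4 = f(t + h, u + h * k3)
        u = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return u

# u' = u, u(0) = 1  ->  u(1) = e
approx = rk4(lambda t, u: u, 1.0, 0.0, 1.0, 100)
print(approx)  # ≈ 2.718281828...
```

Everything the "complete" solvers add on top of this core loop (adaptive step sizes, stiffness handling, dense output, event handling) is where the dependency weight comes from.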
That's a bit different, and an interesting difference. Certain types of partial differential equations, like the one solved there, generally use a form of step splitting (i.e. a finite volume method with a staggered grid). Those don't map cleanly onto standard ODE solvers, since you generally want to use a different method on one of the equations. It does map into the SplitODEProblem form (though not a DynamicalODEProblem), so there is a way to represent it, but we have not created optimized time stepping methods for that. But I work with those folks, so I understand their needs, and we'll be kicking off a new project in the MIT Julia Lab in the near future to start developing split-step and multi-rate methods specifically for these kinds of PDEs.
It's an interesting space because:
(a) There aren't really good benchmarks on the full set of options, so a benchmarking paper would be interesting to the field (which then gives a motivation for the software development).
(b) None of the implementations I have seen use the detailed tricks from standard stiff ODE solvers, so there's some major room for performance improvements.
(c) There are some alternative ways to generate the stable steppers that haven't been explored, and we have some ideas for symbolic-numeric methods that extend what people have traditionally done by hand here. That should be interesting.
So we do plan to do things in the future. And having Oceananigans is great because it serves as a speed-of-light baseline: if you auto-generate an ocean model, do you actually get it as fast as a real hand-optimized ocean model? That's the goal, and we'll see if we can get there.
We have tons of solvers, but you always need more!
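To make the split-step idea concrete with a toy (this is an illustration of the general technique, not the Oceananigans scheme): updating velocity and position in alternating sub-steps gives the classic kick-drift-kick leapfrog integrator, the same structural trick as a staggered-grid stepper. A minimal Python sketch:

```python
def velocity_verlet(x, v, accel, dt, steps):
    """Split-step ('kick-drift-kick') integration: the velocity update
    and the position update are applied as separate sub-steps, rather
    than feeding the whole state to one monolithic ODE step."""
    for _ in range(steps):
        v += 0.5 * dt * accel(x)   # half kick
        x += dt * v                # full drift
        v += 0.5 * dt * accel(x)   # half kick
    return x, v

# Harmonic oscillator x'' = -x, starting at x=1, v=0. The split
# structure keeps the energy (nearly) conserved over many periods,
# where naive explicit Euler would drift.
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, 0.01, 10000)
energy = 0.5 * v * v + 0.5 * x * x
print(energy)  # ≈ 0.5, the initial energy
```

The appeal, as with the PDE case above, is that the sub-steps can each use a method suited to their own piece of the physics.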
A large part of why I started using Julia is that calling into other languages through the C FFI is pretty easy and efficient. Most of the wrappers are a single line. If there is no existing driver support, I would pass the C headers through Clang.jl, which automatically generates Julia wrappers for the C API from a C header.
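For comparison, the closest Python stdlib analogue to that one-line wrapping is ctypes. Here it wraps libm's cos; note that library name resolution is platform-dependent (this sketch assumes a Unix-like system where find_library("m") resolves):

```python
import ctypes
import ctypes.util

# Locate the C math library: libm.so.* on Linux, libSystem on macOS.
# find_library may return None on some platforms.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes marshals correctly: double cos(double)
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

The declaration step is the part Clang.jl automates away in Julia by reading the headers for you.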
After looking at Mojo, I appreciated all the paradigms that Julia was pushing forward even more than I did before. Mojo's greatest asset and curse is focusing on being a better Python. Julia's greatest asset and curse is trying to do a lot more.
What other private company languages are there? Swift is the best example I can think of, which matches Mojo's situation down to the head of the project.
Aren't most languages invented and initially developed within a private company? Go, Dart, Java, JavaScript, C#, F#, VBA, Kotlin, Erlang, C (at AT&T), Rust (at Mozilla). The list is probably very long.
Most of those are open source, and while initially developed at a private company, most are run by non-profits (such as the Rust Foundation for Rust). Also, the language itself is not usually the main product at those companies.
The person in charge of Mojo was previously in charge of a massive effort to give TensorFlow a new backend, for which he conveniently picked Swift (a language he was also involved in) despite there being better options, only to abandon it halfway through because it was intractable and Swift wasn't ready (or willing) to support the compiler changes needed. Now he works on Mojo.
It gives real “Jony Ive leaving Apple to go work somewhere where nobody can tell him ‘no’ “ vibes.
I'm not sure the first sentence is correct: "Of course AI is a bubble." If the product is good enough, the fact that "every single billboard is advertising some kind of AI company" may just show enthusiasm.
Maybe it will be answered in the first sentience;) Sentient AI would be quite a thing.
...I feel compelled to quote this now... more for the parent of your comment
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".
I’m sure this question has been asked plenty of times, but what is the best place to start with LLMs (no theory, just installing them locally, fine-tuning them, etc.)?
My general understanding is that 174 classifies more activities as capital assets. If you build a factory, you have to write off that expenditure over time, including the money you pay people to build it. Later, if you repair a toilet, that can be expensed immediately, but if you upgrade the electrical system, that counts as an improvement and must also be amortized.
Research activity in other, more capital-intensive industries has largely been treated this way for a while. What 174 does is bring software development under that umbrella. This means that most of what we do as devs counts as building assets, so our salaries must be amortized over 5 years. Previously, companies had the choice of whether to do this over 1 year or 5; 174 removes that choice, if I'm recalling all the things my accountant explained to me.
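The cash-flow effect of that change is easy to see with toy numbers (hypothetical figures; this uses simple straight-line amortization and ignores the mid-year convention and other details of the actual rule):

```python
# Hypothetical company: $1M revenue, $1M of developer salaries.
revenue = 1_000_000
dev_salaries = 1_000_000

# Old rules: expense salaries immediately -> no taxable profit in year 1.
taxable_old = revenue - dev_salaries          # 0

# Under 174: amortize over 5 years, so only 1/5 is deductible in year 1
# (straight-line for simplicity; the real rule differs in the details).
year1_deduction = dev_salaries / 5
taxable_174 = revenue - year1_deduction       # 800,000

print(taxable_old, taxable_174)
```

So a company that breaks even in cash terms suddenly shows a large taxable profit, which is exactly why smaller companies without big-tech cash reserves are struggling with it.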
This rule change came as part of the Trump tax cuts, as a revenue offset to get the full bill passed. If you look at it one way, the idea is to get taxes out of big tech and offshore development. The problem is, big tech can weather the changes while many smaller companies are finding it difficult.
Everyone assumed it would be changed by the time it came into effect, but here we are, with a dysfunctional congress becoming more selfish and partisan. Like the middle class, the middle ground has been disappearing... it's super frustrating as a citizen.