Hacker News | new | past | comments | ask | show | jobs | submit | seawlf's comments

I'm building a new terminal multiplexer called cy: https://cfoust.github.io/cy/index.html

It records your sessions and is configurable with Janet, a Lisp. I'm having a lot of fun with it! (It's also the only thing I ever seem to post about on HN, heh)


This is not a hard-and-fast rule. There's hardly agreement in style guides on this note. Don't be a jerk and nitpick style on the internet; it does not add to the conversation. https://english.stackexchange.com/questions/13631/is-an-apos...


I took the class this textbook is used for a couple of years ago at UW. Andrea was a fantastic teacher and it was the easiest presentation of the material I'd ever found! Seriously, read this textbook. It's full of wit and information.


Yes, the scripting and building is all "online" as you put it. There are essentially no barriers to building; all you have to worry about is the primitive limit.


> ideologically more neutral

I hardly feel like the global market will accept Chinese alternatives as being "ideologically neutral" in any sense of the phrase.


As someone who has been programming in Rust for nearly a year, even for commercial purposes, this article is baffling to me. I've found the compiler messages to be succinct and helpful. The package system is wonderful. It's dead easy to get something off the ground quickly. All it took was learning how and when to borrow.


I can see where the author comes from. I've been working with ^W^W fighting against Tokio this week, and the error messages are horrible. Representative example:

  error[E0271]: type mismatch resolving `<futures::AndThen<futures::Select<futures::stream::ForEach<futures::stream::MapErr<std::boxed::Box<futures::Stream<Error=std::io::Error, Item=(tokio_uds::UnixStream, std::os::ext::net::SocketAddr)> + std::marker::Send>, [closure@src/server/mod.rs:59:18: 59:74]>, [closure@src/server/mod.rs:60:19: 69:10 next_connection_id:_], std::result::Result<(), ()>>, futures::MapErr<futures::Receiver<()>, [closure@src/server/mod.rs:74:18: 74:74]>>, std::result::Result<(), ((), futures::SelectNext<futures::stream::ForEach<futures::stream::MapErr<std::boxed::Box<futures::Stream<Error=std::io::Error, Item=(tokio_uds::UnixStream, std::os::ext::net::SocketAddr)> + std::marker::Send>, [closure@src/server/mod.rs:59:18: 59:74]>, [closure@src/server/mod.rs:60:19: 69:10 next_connection_id:_], std::result::Result<(), ()>>, futures::MapErr<futures::Receiver<()>, [closure@src/server/mod.rs:74:18: 74:74]>>)>, [closure@src/server/mod.rs:78:34: 83:6 cfg:_]> as futures::Future>::Error == ()`
    --> src/server/mod.rs:85:15
     |
  85 |     return Ok(Box::new(server));
     |               ^^^^^^^^^^^^^^^^ expected tuple, found ()
     |
     = note: expected type `((), futures::SelectNext<futures::stream::ForEach<futures::stream::MapErr<std::boxed::Box<futures::Stream<Error=std::io::Error, Item=(tokio_uds::UnixStream, std::os::ext::net::SocketAddr)> + std::marker::Send>, [closure@src/server/mod.rs:59:18: 59:74]>, [closure@src/server/mod.rs:60:19: 69:10 next_connection_id:_], std::result::Result<(), ()>>, futures::MapErr<futures::Receiver<()>, [closure@src/server/mod.rs:74:18: 74:74]>>)`
                found type `()`
     = note: required for the cast to the object type `futures::Future<Item=(), Error=()>`
I have hope that things will improve on this front when `impl Trait` lands.

EDIT: After re-reading this, I want to add that I don't mean to hate on Tokio. I like the basic design very much, and hope that they can work out the ergonomics issues and stabilize the API soon.


That's nothing!

      |
  212 | /     fn call(&self, payload: Self::Request) -> Self::Future {
  213 | |         let request = self.create_request(payload);
  214 | |
  215 | |         let work = async_block! {
  ...   |
  254 | |         FutureResponse(Box::new(work))
  255 | |     }
      | |_____^
  note: ...so that the type `impl futures::__rt::MyFuture<<[generator@src/client.rs:215:20: 252:10 self:&client::Client,
request:hyper::Request for<'r> {futures::Async<futures::__rt::Mu>, (), fn(std::result::Result<hyper::Response, error::Error>) -> std::result::Result<<std::result::Result<hyper::Response, error::Error> as std::ops::Try>::Ok, <std::result::Result<hyper::Response, error::Error> as std::ops::Try>::Error> {<std::result::Result<hyper::Response, error::Error> as std::ops::Try>::into_result}, futures::MapErr<hyper::client::FutureResponse, [closure@src/client.rs:217:59: 217:85]>, hyper::Response, std::option::Option<std::string::String>, &'r hyper::Response, hyper::StatusCode, fn(std::result::Result<hyper::Chunk, error::Error>) -> std::result::Result<<std::result::Result<hyper::Chunk, error::Error> as std::ops::Try>::Ok, <std::result::Result<hyper::Chunk, error::Error> as std::ops::Try>::Error> {<std::result::Result<hyper::Chunk, error::Error> as std::ops::Try>::into_result}, futures::stream::Concat2<futures::stream::MapErr<hyper::Body, [closure@src/client.rs:234:49: 234:75]>>}] as std::ops::Generator>::Return>` will meet its required lifetime bounds


I see. Rust is aiming to be closer and closer to C++ every day!

Anyways, I think it still has a long way to go to match the length of even common C++ template-related error messages...


>I see. Rust is aiming to be closer and closer to C++ every day!

Or, you know, it's aiming at nothing of the sort. This is early, still-unpolished behavior; the language has been simplifying things (e.g. the early sigils and lots of other stuff), and plans even more simplification and friendliness.

https://jvns.ca/blog/2018/01/13/rust-in-2018--way-easier-to-...

https://blog.rust-lang.org/2018/03/12/roadmap.html


I've definitely seen much worse with C++. You get these kinds of errors, though, when you use Tokio and combine many different futures together: the futures want 'static lifetimes while I'm using a reference to self inside an async block. Hopefully that gets solved this year.


I have found that while C++ error messages can be very long, it's surprisingly often the first line of the error message that shows you the problem.


Also, ever since clang raised the bar, recent versions of clang, gcc, and VC++ all use heuristics to try to present some kind of meaningful message.

And latest C++14 and C++17 changes also help library writers to error check the type parameters.

Of course those that have to use other compilers still need to face the sea of incomprehensible error messages.

And in any case, better stay away from template meta-programming libraries, at least until modules and concepts eventually land.


Coincidentally, `impl Trait` stabilization was just approved, meaning it should be in the next beta: https://www.reddit.com/r/rust/comments/86f3h6/impl_trait_sta... . It will be a crucial step forward for Tokio's error messages.


The grievances you and the article's author mention seem less to do with Rust itself, and more to do with this seemingly horrible futures library. As far as I can tell, it's still in the rust-lang-nursery, which is an indication it's not ready for prime time yet.


Don't get me wrong, it's a great library! You can build pretty darn fast systems with it, all type-checked and correct. Just don't try to fit too many things into one thread yet; wait for async/await and non-'static lifetimes in the core.


I hope async programming doesn't become the standard in Rust. So much work has gone into allowing clean and safe threading, but people seem to be led towards the async libraries, which IMO solves a scaling problem only 1% of users will have. It's great that they exist, but if you're not expecting to have a c10k class problem, you can use threads and you'll probably have a better time.


If you're a library, creating threads causes side effects for the host program, and doing async when called doesn't. For instance, in a single-threaded program, it's always safe to call malloc() after fork(). In a multi-threaded program, another thread might have called malloc() and picked up a global lock; since only the thread that called fork gets copied (since you don't want to do the other threads' work twice!), there's now a copy of that global lock being held by a thread that doesn't exist, so calling malloc() will deadlock.

My personal interest in Rust is as a C replacement, including as a replacement for existing libraries that are written in C. While I agree threads can be very efficient (after all, the kernel implements threading by being async itself, more or less), they're annoying for this use case.


I just want to be able to say "do this thing or time out in 5 seconds", and that's like all the 'async' I need in rust. Everything else, nah.

But that's just me - many people need these 0 cost abstractions, this is Rust's focus, and I'll have to wait for the higher level stuff for my projects.
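For what it's worth, that one case doesn't strictly need an async runtime at all. A minimal sketch of "do this thing or time out" using only std threads and channels (the helper name `with_timeout` is made up for illustration):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run `work` on a throwaway thread; stop waiting after `timeout`.
// (Note the thread itself keeps running in the background if it's slow.)
fn with_timeout<T: Send + 'static>(
    work: impl FnOnce() -> T + Send + 'static,
    timeout: Duration,
) -> Option<T> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // The receiver may have given up already; ignore the send error.
        let _ = tx.send(work());
    });
    rx.recv_timeout(timeout).ok()
}
```

Then `with_timeout(|| do_the_thing(), Duration::from_secs(5))` returns None if the work takes longer than five seconds. Zero-cost it is not (a spawned thread per call), but for the 99% case it's plenty.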


> less to do with Rust itself, and more to do with this seemingly horrible futures library

Doesn't almost every major programming language have Futures though? (C++, Java, Python, Ruby, JavaScript, Go).

It seems a fair criticism for so common a building block.


For a long time C++ and Javascript did not have futures. Rust is relatively new, and futures aren't part of the language proper yet. The problem is that the author's criticism of Rust seems to all hinge around a bleeding edge feature.

I would agree with the author's criticism if it were "futures are in the lang nursery and still not ready to use," rather than "Rust is bad, because I got nasty errors when using this work-in-progress library."


Rust's futures leverage the type system to have extremely minimal overhead; many of those languages don't try to do that. That's where the difficulty comes in.


If we'd all start programming in stable, mature languages instead of letting peer pressure goad us into using betaware and worse, work would be a lot simpler.

It would also encourage organizations large and small to start releasing complete, polished products instead of the "move fast and break things" crap that has infected the industry.

Imagine if car makers worked the same way.


Your "stable and mature" languages were the crazy research projects of old. No one is forcing you to do anything, but let us not forget the nature of our "mature" technologies and the process by which they form.


By that logic, programs written in C are the most polished. Yeah, that checks out. /s


Ask a COBOL programmer about this...


Ask an assembly programmer about this...


Ask a machine code programmer about this...


Ask a Minecraft machine programmer about this.... (errr, couldn’t resist)


Isn't that what Uber is doing? :/


That is rough, but after looking at C++ template error messages a lot recently, that error message looks pretty sweet!


Finally, a language that can compete with C++ on complexity and size of error messages!


This is still an unstable library (futures and Tokio are both still 0.1 releases) under active development; in general I tend not to see crazy compile errors working on synchronous projects. FWIW, I've made a Chip8 emulator with Rust and am working on a Z80 emulator now.


Yeah, sync projects are just pure pleasure to work with. The language is ergonomic, errors are easy to read and tooling is the best ever.

The problems right now start when you want to go async. I follow the development because I want to see easy, safe and fast way of writing async programs, and there's lots of interesting development happening with Rust.


Yeah, I recently finished a Tokio-based server. Working with future combinators is very frustrating. I accidentally captured a variable in a closure (it should have been cloned and moved), and the compiler didn't tell me where it happened, just gave an error saying the variable requires a 'static lifetime.


The new async/await stuff will help a lot with this; it'll enable borrowing across futures, which will remove this requirement and make things a lot simpler.


Are there any plans to make them happen this year?


Not just this year, but by the third quarter.


I know this won't help you, but there is an issue tracking this: https://github.com/rust-lang/rust/issues/43353


The last Tokio ergonomics rework really helped me and my classmate. Now we're getting stuff done, and we're enjoying it a lot, since we're modeling our connections as instances of async state machines (in which, by the way, a given state may itself be a sub-state-machine).

This is so awesome.


I recently wrote a few projects in Rust (C/C++/Go/JavaScript/Java/Python as background), and very much like the language. My 2 cents from my endeavors with Rust:

I felt like all type errors are backwards. That is, "got" referred to the type of the target you were assigning to, not the type you were actually passing. This may only happen in some cases, but I just started tuning the content of those errors out and instead adjusted randomly until things worked or the message changed.

I was often getting obscure type errors that were not at all related to the issue, and sometimes the compiler just insisted that one more borrow would do, no matter how many borrows you stack on. This is definitely because I did stupid things, but the compiler messages were only making matters worse.

String vs. str is a pain in the arse. My code was littered with .as_str() and .to_string(). I never had the right one.

Enums are super nice, but it's very annoying that you cannot just use the value as a type. My project had a lot of enums (user-provided query trees), and it was causing a lot of friction.

There are also many trivial tasks where you think "of course there is an established ecosystem of libraries and frameworks for this", and end up proven wrong. I mostly did find one library for each thing I needed, but it was often immature. The HTTP server + DB game seems especially poor.

In the end, I had to quit the fun and get work done (and others did not find playing with new tools as fun as I did), so I ported the project to Go and got productive. It took a fraction of the time to write in Go, the libraries are just so much more mature, it performs significantly better than the Rust implementation (probably because of better libraries—definitely not stating that Go is faster than Rust here), a compile takes 1 (one) second rather than minutes, and there is in general just much less friction.

On the flip side, it takes about 2-3 times as much Go code as Rust to do the same task, even if the Go code was way easier to write. The code is also a lot uglier. As an especially bad case, something that was a single serde macro call in the Rust version is 150 lines of manually walking maps in the Go version.


> My code was littered with .as_str() and .to_string().

PSA: If you have a variable that's a String, you can easily pass it to anything that expects a &str just by taking a reference to it:

    fn i_take_a_str(x: &str) {}
    let i_am_a_string = "foo".to_string();
    i_take_a_str(&i_am_a_string);
Every variable of type &str is just a reference to a string whose memory lives somewhere else. In the case of string literals, that memory is in the static data section of your binary. In this case, that memory is just in the original allocation of your String.


Ah, I found that out later but had forgotten all about it. :)

I don't remember how I found out, but it seemed oddly magical until I just now read the docs: String implements Deref<Target=str>. Makes more sense now.

I still had a bunch of to_string()'s, though, as things tended to take String whenever I had &str's. I found this to be a very unexpected nuisance.

EDIT: Maybe I needed as_str() as the & trick doesn't work if the target type cannot be inferred as &str?


FWIW, this is why we go over this stuff in the book now; lots of people struggle with it, it's not just you.

And yeah, Deref doesn't kick in everywhere, so you may need the .as_str() in those situations. It should be the extreme minority of cases, generally. Same with .to_string(), though even more so. Most stuff should take &str, not String.


It's relatively rare that APIs should be taking ownership of `String`s from you; the majority of the time arguments should be borrowed. I'm curious what cases you ran into most frequently that required `String`.


> as things tended to take String whenever I had &str's

Functions should prefer &str or perhaps T where T: AsRef<str>. Note that if you write code that needs an owned String, you could consider taking some T where T: Into<String>, because this allows you to take many kinds of string types, such as &str, String, Box<str>, and Cow<'a, str>.
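A tiny sketch of both suggested signatures (the function names here are made up for illustration):

```rust
// Borrowing case: accept anything that can be viewed as a &str
// (&str, String, &String, Box<str>, Cow<str>, ...).
fn shout<S: AsRef<str>>(who: S) -> String {
    format!("HEY, {}!", who.as_ref())
}

// Owning case: accept anything convertible into an owned String, so a
// caller that already holds a String moves it in with no extra allocation.
fn store<S: Into<String>>(name: S) -> String {
    name.into()
}
```

Both `shout("world")` and `shout(String::from("world"))` compile as-is, which is exactly the convenience the generic buys you.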


I don't remember the details, but I just recall that I ended up with a converting nightmare.

Your suggestions make sense, but I can't help but think that there is something fundamentally weird about basically having to use generic programming just to take a string arg. The most sensible thing would be "everything uses &str".


> Your suggestions make sense, but I can't help but think that there is something fundamentally weird about basically having to use generic programming just to take a string arg.

Indeed, it's the hard trade-off to make every time you need a string reference: do you want a function that is slightly more general, or one that has an easy-to-read signature?

It becomes even more interesting when defining trait methods. E.g.:

    trait Hello {
      fn hello<S>(who: S) where S: AsRef<str>;
    }
Has the benefit of being more general, but cannot be used as a trait object (dynamic dispatch), since it wouldn't be possible to generate a vtable for &Hello.

I generally prefer the generic approach for functions/methods that want to own a String, since using some_str.to_owned() to pass something to a method is not very ergonomic (compared to &owned_string for a function that takes &str). But you have to be certain that you don't want to use the trait as a trait object.


It's just a tradeoff, like any other. If you use a generic, you add complexity, but can accept a wider set of types. If you don't, things are simpler, but you accept a smaller set of types. I personally find the AsRef to almost always be overkill. YMMV.


> Enums are super nice, but it's very annoying that you cannot just use the value as a type. My project had a lot of enums (user-provided query trees), and it was causing a lot of friction.

Note that OCaml has this feature of using an enum value as a type (or using a subset of enum values as a type, etc.). It works very well, but it quickly produces impossibly complicated error messages.

I'd like Rust to have this feature, eventually, but not before there is a good story for error messages.
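In the meantime, the usual Rust workaround is a newtype per variant with an enum wrapping them. It's clunkier than OCaml's polymorphic variants, but it lets a signature demand exactly one variant at compile time (all names below are illustrative):

```rust
// One concrete type per "variant"...
struct Leaf(i64);
struct Pair(Box<Node>, Box<Node>);

// ...and an enum wrapping them for the general case.
enum Node {
    Leaf(Leaf),
    Pair(Pair),
}

// This function can only ever be handed a Leaf; the compiler enforces it.
fn leaf_value(leaf: &Leaf) -> i64 {
    leaf.0
}

fn eval(node: &Node) -> i64 {
    match node {
        Node::Leaf(l) => leaf_value(l),
        Node::Pair(p) => eval(&p.0) + eval(&p.1),
    }
}
```

The cost is the boilerplate of double-wrapping every construction site, which is presumably the friction the grandparent ran into with their query trees.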


I am not very familiar with OCaml, but why would such a feature result in complicated error messages? I can only really imagine two new error scenarios: "Expected Enum::ValueA, got Enum" and "Expected Enum::ValueA, got Enum::ValueB".


I’m interested in this feature because of the ”incremental” static case analysis it enables. What’s the problem with the error messages?


Oh, and an added thing that bugged me a lot: the error part of Result<T, E>. During my short time coding Rust (I'll get back to it later), I never really found a way to ergonomically handle errors.

I find it really awkward that the error is a concrete type, making it so that you must convert all errors to "rethrow". Go's error interface, and even exception inheritance, seem to have lower friction than this.


I'm quite fond of error-chain (https://github.com/rust-lang-nursery/error-chain), which helps mitigate it somewhat. You can do things like:

    use error::{Error, ErrorKind, Result, ResultExt};
    
    fn some_func(v: &str) -> Result<u32> {
        v.parse::<u32>().chain_err(|| ErrorKind::ParseIntError)
    }
The purpose of `chain_err` here is to add on top of the previous error, to explain what you were trying to do, instead of passing up the previous error (in this case, `std::num::ParseIntError`).

If you don't like that, you can do something like this:

    use std::boxed::Box;
    use std::error::Error;

    fn some_func(v: &str) -> Result<u32, Box<Error>> {
        v.parse::<u32>().map_err(|e| Box::new(e))
    }
But then you'd have to box every error.


? should help a lot with this, but libraries like error-chain, or the newer (and, IMO much better) failure can help even more.


But ? only worked if the function it was used in had the same error type as the function you called, which was mostly not the case...

... I think. I don't remember it that vividly.


It doesn't have to be the same type, there just has to be a suitable implementation of the 'From' trait to perform the conversion if the types don't match.
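Concretely, a hand-rolled version of that conversion might look like this (`MyError` is a made-up example type, not from any library):

```rust
use std::num::ParseIntError;

// A made-up application error type for illustration.
#[derive(Debug)]
enum MyError {
    BadNumber(ParseIntError),
}

// With this impl in place, `?` converts a ParseIntError into MyError
// automatically at the return site.
impl From<ParseIntError> for MyError {
    fn from(e: ParseIntError) -> MyError {
        MyError::BadNumber(e)
    }
}

fn double(v: &str) -> Result<u32, MyError> {
    let n: u32 = v.parse()?; // ParseIntError -> MyError via From
    Ok(n * 2)
}
```

Crates like error-chain and failure exist largely to generate piles of conversions like this for you, so that `?` "just works" across error types.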


? calls a conversion function. It only won't work if the error you're trying to return won't convert to the error that's in the type signature. Even then, you have options, like .map_err.


We have gone a bit down a tangent here. My original complaint was about the friction of different error types compared to other languages (like, say, Go).

Having to implement a bunch of From traits (unless you need io::Error, because everything seemed to have conversions from/to that), or having to implement inline error conversion through map_err, is such friction. I might go as far as consider it the most cumbersome error system I have used. Clean idea, cumbersome implementation.

My comment about '?' not working with mismatched types was mostly just to say that it doesn't fix anything, it just adds a bit of convenient syntactic sugar.


There is zero friction if you use either the error_chain or failure crate, and you can still recover the precise underlying errors. The only thing is that it allocates.


I will look into those crates next time!


Yeah - I mean, I'm only learning it as a kind of hobby at the moment, but as a working Java programmer I find it incredible how helpful the compiler can be. It even gives you little helpful syntax hints and stuff.


Writing Rust code isn’t that hard, sure. The annoyances start when you try modifying code or moving things around.

Prototyping and editing code makes for most of my work, and Rust makes that a chore. That’s my main gripe with the language. It’s more like moving through molasses than encountering a brick wall.


It's interesting how perspectives differ; I love refactoring Rust code more than any language I've ever used, as it catches so many of my errors when doing so for me, at compile time.


I love Rust when it's refactoring time, the compiler essentially spits out a checklist that you just need to work through. And once it's done complaining it feels pretty confidence inspiring.

But I'll agree that Rust is unpleasant for prototyping. What I find myself doing a lot when starting out a project is just figuring out if some snippet of code will work. There's no REPL to just run it in. So I have to either set up a scaffolding project just to run it, or shove it somewhere along the working path and move it to its real spot later. Except sometimes that messes up the borrows, or the signatures, and I have to decide between temporarily altering my working code to accommodate this small test or writing more code without testing to reach the next test point.

And after everything works with your scaffolds and shims, you have to rip it all out and put your snippet where you wanted it in the first place. And then fill in all the gaps that prevented you from testing that snippet where it is in the first place; hopefully it works, otherwise you're backtracking and rebuilding scaffolds you just ripped out.

There are times I just want to write a function and not declare a return type, and not have the compiler complain about non-exhaustive matching, because there's only one usage and its output is going straight into a `println!("{:?}", thingy)` anyway.

I guess the C++ equivalent is: I know when the code reaches this point it'll segfault and blow up, but I don't care because if it made it that far that means the thing I'm prototyping ran and gave me some feedback that I could act on. Rust just forces you to write everything instead of just things up til the prototype point.


Do you know about unimplemented!()? I use #[test] functions for what I'd throw into a REPL, and if they're not horrible you can keep them as actual tests later.
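A contrived sketch of that workflow: stub the parts you haven't written with unimplemented!() so everything type-checks, and poke at the working snippet from a #[test] instead of a REPL (all names here are made up):

```rust
// The snippet actually being prototyped.
fn parse_header(line: &str) -> Option<(String, String)> {
    let mut parts = line.splitn(2, ": ");
    let key = parts.next()?.to_string();
    let value = parts.next()?.to_string();
    Some((key, value))
}

// Not written yet; the signature alone satisfies the compiler.
#[allow(dead_code)]
fn parse_body(_raw: &str) -> Vec<u8> {
    unimplemented!()
}

#[cfg(test)]
mod scratch {
    use super::*;

    #[test]
    fn poke_at_it() {
        // REPL substitute: run `cargo test -- --nocapture` to see output.
        println!("{:?}", parse_header("Content-Type: text/plain"));
    }
}
```

If the scratch test turns out to be useful, swap the println! for an assert and keep it.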


That's really nifty! Not quite what I was thinking of, though, but it gave me a starting point for some research before I came across https://github.com/rust-lang/rfcs/issues/1911 which describes why unimplemented!() doesn't quite work and what I'd like instead.


Ever tried changing an owned field in a struct to a borrowed one with lifetimes? Be prepared to change not only the struct and its fields but also everything else where it appears. Yes, the compiler will catch your mistakes, but no, it's not something I love.


I think the Rust devs like C++'s abstraction level. Rust is about the same (more modern), it just catches your errors. I don't think it's possible to make Rust as ergonomic as Python, but I do think it's possible to move a little in that direction without losing Rust's strengths.


Actually, that is the goal of the other languages: Swift, Haskell, OCaml, ParaSail...

Influenced by Rust's success in bringing affine types into the mainstream, they aim to keep using automatic memory management as their default way of managing memory, while offering some escape hatches based on affine types for low-level optimizations.


Indeed, this is painful. Plus, lifetimes tend to percolate through the codebase. This is logical and necessary: if a struct is bound to a lifetime, then another struct holding it is also bound to that lifetime. It can be painful regardless ;).


I hear you. It’s an important thing though; it will get slightly easier soon...


Were you a C/C++ programmer before? Your starting perspective matters a lot.


> All it took was learning how and when to borrow.

That's disingenuous. It's really not as simple as you're trying to make it sound. But I don't understand the point of this guy's blog post, unless it's just to whine. There's so little real content in his post.


> As someone who has been programming in Rust for nearly a year, even for commercial purposes, this article is baffling to me.

The author introduces himself as having a Python background, and as someone who experienced some challenges wrapping his mind around SQL. I wouldn't expect anything different.


I write hardware drivers (mixed kernel/user mode) for 100Gb/s FPGA network accelerators at work, in C and C++, for 3 different OSes. I'd argue that I am very much the target audience for Rust.

And I also find that SQL can be quite infuriating to deal with, and had a high-friction experience with Rust.


Having just moved to SF from Madison, I have to say that the engineering competence of teams there is disappointing in comparison to the Bay Area. Not only that, but Madison companies really do not pay well at all. You can't expect to keep talent if you aren't paying for it.


So far it’s working out and we’re excited about growing here. Our data says we’re competitive on compensation and I’m proud of our retention. I’m very happy about the level of our engineering talent and this team is building a successful product used around the world right here in Madison.

I’m sorry you didn’t have a great experience with your last team.


I'm glad you're happy with your company's engineering output.

As an engineer, I would head for Chicago or Minneapolis. Madison doesn't have "the next job". When that time comes, I wouldn't want to uproot my family.

I'm glad your retention is stellar. I would expect increased retention due to a lack of alternatives.

Most companies believe their compensation is competitive. A lot of them are wrong.

The Midwest might be a place where companies can skirt those two things fairly easily.


Is there any company out there who actually says “our compensation is NOT competitive?” That’s pretty much the lowest bar. When I hear “competitive salary” it means the company can’t think of a more positive word to describe it that is also truthful.


I'm really not sure what your point is. If you're losing candidates based on compensation being too low, then you're not competitive.


Everyone at least says their compensation is competitive; it’s not a way for an employer to differentiate. As a candidate, when I see the word “competitive” that tells me it’s lower than “generous,” “excellent,” “top of market,” basically lower than any other description of compensation out there.


Maybe not, but there are plenty of companies that don’t say their compensation is competitive.


Yes, "Next job opportunity" is the biggest stumbling block in the midwest. It's workable, but more difficult when you become more specialized.

I've always assumed this is the same everywhere, but I've never received a competitive raise outside of my first year with a company. If you come in and make some big adjustments that save money, you might (maybe) get a bump, but after that they take it for granted. I've only ever kept up with inflation (especially health care inflation), by changing employers, so "next job" is a huge consideration.


You certainly would be entitled to make that decision, and many do! The point of my original comment was that we have sufficient talent to build and scale tech companies here. It's certainly not going to attract every talented worker (like yourself), but not even the bay area has that going for it anymore.

For entrepreneurs thinking of building in the Midwest, you absolutely can. Let's not let that point get lost in a discussion about personal values.


> not even the bay area has that going for it anymore.

Are there definitive stats on this? Because I would disagree, especially compared to the Midwest.


My point was there are enough talented people who don't want to work in the bay area that other cities have enough to grow tech companies. This is kind of obvious.


That’s always been the case. The issue is quality. Outside of the Valley, top-caliber developers tend to congregate in big hubs like NYC or LA, or interesting places like Portland, Seattle, Austin, or Denver. The Midwest tends to be for locals only due to a combination of less aggressive investors (not to mention a lack of them), bad climate, and a lack of stuff to do compared to everywhere else.

Chicago would be an exception, but crime would keep people away.

Can any of this change? Of course, but it’s really hard to change culture, and that takes time (e.g., even after all these years CA still has a gold rush mindset).


Chicago didn't seem to have that great a real crime problem (as opposed to perception) when I lived there.

There probably would not be much to do in Nebraska, but then I can't see the Bay Area as being exactly an epicenter of culture either.


Chicago is typically in the top 10 in the country in terms of crime relative to population.

> Bay Area as being exactly an epicenter of culture either

It depends on what you want. The SF Bay Area is not NYC in terms of fashion or theater. If you want that, you go to NYC. If you want a good clubbing scene, it's either LA or NYC. However, it has a bit of everything. If you like the city, there's SF and Oakland. There are also a lot of suburbs and quaint downtowns like Palo Alto if you like that. There's always something going on regardless of whether you're in the city or out. The food scene is good. Wine country is next door. Tahoe isn't too far, and there are a lot of places to hike nearby. There's also fishing and sailing on the Pacific, or canoeing on the rivers. Yosemite is doable. Also the weather is almost always temperate year round, hovering around the 60s-70s. These are a few reasons why housing is so expensive here. I didn't even go over the professional reasons.

Comparing the Bay Area to Nebraska is disingenuous. There's no equal in the Midwest, though Chicago would be the closest.

The US Midwest is just a really hard sell to almost everyone except the people who grew up there. There are just too many other alternatives in the US. (I used to be a consultant going here and there.)


Crime in Chicago is very much concentrated. In areas where your average HN reader would live, I very much doubt that your chance of being a crime victim would be any higher than in SFBA. And in most suburbs (from which you can get to Chicago much easier than to SF from SV) it would be significantly less; reading NextDoor here is downright scary.

Certainly, California has more diverse nature than most of the Midwest does. But culturally... as far as theaters, museums, music etc. go, SF is behind not only New York, but Chicago as well. The food scene, once you get past French Laundry and a couple more places in Napa, isn't Chicago's either, just more expensive. I've left far less money at Alinea than at Atelier Crenn, and while of course this is highly subjective, there really was no comparison between the two.

Weather is nice, but by the second year it became incredibly boring; I like my seasons. But then, I moved here for personal, not professional reasons. And professionally, while I was making quite a bit less money in Chicago, I could live in a hot area, 1 minute from the subway, a 15-minute walk to work, a 10-minute walk to NEXT or Moto (yes, I like my food), and somehow had far more money left to squirrel away than I do living in some cheesy apartment complex in the middle of San Jose nowhere.

Certainly, if you get into a startup that actually makes it, gets acquired, or something like that, you have a better chance of striking it rich than in Chicago. But as far as I know, most people don't. And it's not that work here was that much more interesting -- doing yet another JS framework du jour, or yet another noSQL database (because rewriting MUMPS, just not quite as good, is always fun) isn't that exciting once you get old enough and stop jumping at every new and shiny thing. I'd much rather do something interesting at Wolfram (well, if Chambana weren't just as boring as SFBA) than doing yet another chat app in SV...


> once you pass French Laundry and couple more places in Napa isn't Chicago either, just more expensive

We can just agree to disagree on this one. The SF food scene alone is pretty vibrant and constantly evolving. However, this isn't just Chicago vs. SF or Napa. It's hard for Chicago to compete against the entire SF Bay Area in terms of food. Also, you may be confusing the cost of Bay Area food with NYC's.

> Weather is nice, but the second year it became incredibly boring

I'm not sure if you're being sarcastic, but most people don't like dealing with nasty slush and ice for months on end in their daily lives. Moreover, if one climate in the Bay Area gets 'boring', there are plenty of micro-climates. Also, as I've mentioned before, it's pretty easy to get to Tahoe or go to the desert if you want something really different every weekend.

> Certainly, if you get into a startup that actually makes it, gets acquired, or something like that, you have a better chance of striking it rich than in Chicago.

Ignoring the SV lottery, the Bay Area is just a better place professionally for techies than the Chi Metro.

> But then, I've moved here for personal, not professional reasons.

I don't feel that you're disproving my point that mostly locals like the Midwest better than elsewhere. Besides, my main point isn't just that the SF Bay Area is more appealing than the Midwest. If I wasn't very clear, my main point was that there are many other metros in the US that are more appealing overall than the Midwest at large. Of course, different people like different things, so not everyone will agree with me.

EDIT: > Crime in Chicago is very much concentrated.

Unless the data is wrong or I'm misinterpreting it, crime in Chicago doesn't look concentrated like it is in most places or in the Bay Area. It looks pretty well distributed.


I am not sure if the Bay Area is that (or any) better, food-wise, than Chicago, and I've tried most of the well-rated places in both... Matter of taste, of course.

For the difference in cost of living, I probably could fly myself from Chicago to Tahoe every week and still come out ahead. And to each his own, but I like looking in the window and knowing what season it is.

I'm not really a midwestern local, but of all the places I lived or visited in the US, I would certainly pick Chicago well ahead of Bay Area. It might be more "interesting" professionally (although I'd rather work on something actually useful, not the next Juicero or Theranos), but having some money left is pretty sweet, too.


> although I'd rather work on something actually useful, not the next Juicero or Theranos

Bay Area companies have made a lot of what we know of modern life in the 21st century possible. It's not just limited to IT either. This is the birthplace of biotech. It's a lot harder to take your comments seriously when this is what you're writing; it also shows that you're not familiar with the professional side of the Bay Area. There are just a lot of companies, as well as a big variety of them, that give your professional life a lot more flexibility. The concentration of companies also allows for more serendipity, i.e. it's not uncommon for people at Google or FB to just meet by chance and end up working on something together. People are less risk averse and more open to new ideas. I can go on. Does this lead to ridiculous things? Of course. Mistakes are inevitable. At the same time, it's also how major breakthroughs are made.

There's a reason why a lot of things start here and not elsewhere. That said, it's not totally exclusive to the Valley; it's just no longer in the Midwest. Of course I could be missing something, and I've been totally wrong before and could be wrong now or in a few years.

> For the difference in cost of living, I probably could fly myself from Chicago to Tahoe every week and still come out ahead

For SF, maybe; but the Bay Area is more than just SF. Outside of SF, Chicago is only about 20% - 30% cheaper than many other parts of the Bay Area metro.


SV does tend to exaggerate its importance. Part of the brand marketing, I guess. I also fail to see how Google and FB meeting to come up with new ways of stealing your private data is a good thing.

Seriously, sure, there are breakthroughs made in SV, too. But if you adjust for the signal/noise ratio with all the absolutely useless things that SV comes up with (and that's the majority of them -- exactly because it is only in SV that you can get financing and sometimes even sell for billions stuff like the aforementioned Juicero) you could find that cornfields of Illinois are just as innovative. They just have to come up with something that, you know, is useful.

As for prices... for anecdotal reference, I am paying about $1000 more a month for a two-bedroom on the outskirts of San Jose, in a crappy cardboard apartment complex with no walking accessibility to anything, no public transport, and nothing to do, than I did for a place in a Chicago midrise, with stuff like elevators and garages, 2 minutes from the subway, a 10-minute walk from some of the best restaurants in the country, walking distance to downtown, real soundproofing, etc. etc.

Sure, I guess Gilroy might compare with Chicago prices slightly better (but then, houses there are at least 2-3 times as expensive as a comparable Chicago suburb; and Chicago wouldn't stink of garlic, either). And a place 3 hours away from anything... why would I want to live there in the first place?


> I also fail to see how Google and FB meeting to come up with new ways of stealing your private data is a good thing.

It doesn't literally have to be just Google & FB. There are plenty of other companies here such as MS, Amazon, Nvidia, Intel, AMD and so on. Having a large number of programmers, engineers, and scientists in one location tends to produce a lot of breakthroughs and innovation, similar to what happened with the glassmakers in Renaissance Venice.

> SV does tend to exaggerate its importance. Part of the brand marketing,

Yes, I guess the place where the first commercially viable microprocessor and smartphone were created isn't very important. Much of the foundational work on what later became the internet was done here as well. There's a really long list of achievements that overshadows the bread and circuses that are inevitably forgotten.

> As for prices... for anecdotal reference, I am paying about $1000 more a month for a two-bedroom on the outskirts of San Jose

Again, I feel that you're missing (or continuing to ignore) my main point. Despite its problems, Chicago is probably still a great city. The problem is that it has a lot more competition (not including the SF Bay Area) in the 21st century.


These seem like related complaints: low pay doesn't attract top talent. Most of the people looking for higher pay would likely go to Minneapolis or Chicago.


When you factor in the cost of housing in the Bay Area, I'd be willing to bet anywhere outside the Bay Area will compensate you much better.

This might not matter if you're a 20-year-old deciding between renting an 8' room for $1,500 or $200. But it most certainly will matter once you're in your thirties with a family, comparing a $3 million crap hole in Palo Alto vs. a nice, decent house at $300K (in Madison).

I think the best way to go is to spend 5 years working in the Bay Area and save as much as you can. Then get OUT while you still can, before you put down roots and meet someone.


Holy shit, that first one is so annoying. I don't understand how even the shoddiest of voice assistants on the market get it right but Alexa doesn't.


The thing is, ALL other voice assistants I have tested get this right. I have submitted three reports to Alexa support about this very thing and they don't care. It's ridiculous that something so common would be so infuriating.


I've worked on a number of SPAs over the years, and I always run into the problem of getting my team to give a shit about performance. "We're not Google," they protest. Frankly, for a lot of people it's a trade-off of slow client performance for development velocity. It's much easier to just throw on another state for new functionality than it is to consider what parts of the page can be static and how they can be rendered.


Surely if you can't produce an SPA with equivalent or better performance than a more traditional architecture, then don't build an SPA.

Or even better use a simpler solution that gives me 80% of the benefits of an SPA: Turbolinks, PJAX, intercooler.js or even a light sprinkling of good old AJAX.

Does anyone remember "progressive enhancement"?
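
The PJAX/Turbolinks trick boils down to: fetch the next page over AJAX, pull the body out of the response, swap it in, and push a history entry; no client-side rendering at all. A minimal sketch of just the extraction step (not how Turbolinks actually implements it; the real thing also merges the head, manages history, and caches pages):

```javascript
// Minimal PJAX-style body extraction: given a full HTML document,
// return just the <body> markup to swap into the current page.
function extractBody(html) {
  const match = html.match(/<body[^>]*>([\s\S]*)<\/body>/i);
  return match ? match[1] : html; // fall back to the raw payload
}

// Example: what a PJAX-style swap would insert for this response
const page = '<html><head><title>Next</title></head>' +
             '<body><h1>Next page</h1></body></html>';
console.log(extractBody(page)); // → <h1>Next page</h1>
```

The point is that the server keeps rendering full pages; the client just avoids re-parsing CSS/JS on every navigation.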


Perhaps the simplest and most transparent solution for seamless navigation: http://instantclick.io/

As long as your backend is fast enough, it feels like navigating a SPA.


See, "we're not Google" should actually mean that you care about performance more. Google can throw millions of dollars worth of infrastructure into making an app go marginally faster. Whereas, all you have is brainpower before you deploy or ship. And it doesn't take that much more brainpower to get big increases in client performance when you're starting from not-optimized-at-all. The marginal payoff is larger and the marginal costs are much smaller.

