Physical quantum computers have noise. Take a (simplified) scenario: you've set up a circuit with one qubit and prepared it so that it should measure as 0 or 1 with equal probability. In a simulation it comes out 0 or 1 with genuinely equal odds. On real hardware, other factors bias it one way or the other (and the bias may not even be consistent), so it comes out more like 60% 0 to 40% 1, even over thousands of trials.
If you instead set up a circuit with two entangled qubits that should always come out with the same value, 50% chance of 00 and 50% chance of 11, the simulation will show exactly that: the outputs 01 and 10 never appear. But on real hardware there's still a chance you get them. You'll likely (on IBM's quantum computers) see something like 1-5% 01, 1-5% 10, 45-50% 00, and 45-50% 11 (again, over thousands of runs).
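To make those statistics concrete, here's a toy sketch in plain Python. It is not a real quantum simulator; it just models the measurement counts described above, with a made-up per-qubit readout-error probability standing in for hardware noise:

```python
import random

def sample_bell_pair(shots, flip_prob=0.0):
    """Sample an idealized entangled pair: both qubits agree, 50/50
    between 00 and 11. flip_prob is a toy per-qubit readout error."""
    counts = {"00": 0, "01": 0, "10": 0, "11": 0}
    for _ in range(shots):
        bit = random.choice("01")  # ideal correlated outcome for both qubits
        # Each qubit's readout independently flips with probability flip_prob.
        a = bit if random.random() >= flip_prob else str(1 - int(bit))
        b = bit if random.random() >= flip_prob else str(1 - int(bit))
        counts[a + b] += 1
    return counts

random.seed(0)
ideal = sample_bell_pair(10_000)        # "simulator": 01 and 10 never appear
noisy = sample_bell_pair(10_000, 0.03)  # "hardware": a few percent leak into 01/10
```

With `flip_prob=0.0` the 01/10 counts are exactly zero, matching the simulator; with a few percent flip probability you get the 1-5% leakage pattern described above.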
If you want to see how this plays out on simulators and real quantum computers, IBM [0] offers free access (runs on real quantum computers are constrained by credits, which reset each day).
As far as I understand, it's because what's being simulated is a logical qubit, which is different from the very noisy, almost instantaneously decohering physical qubits in current quantum computers.
Software simulates what's supposed to happen, while the hardware only approximates it over many repeated trials.
In principle yes, but I tried in 4 (edit: 5) browsers and couldn't reproduce the problem, and it's late here and I'm running out of steam. So you may have to suffer overnight. Sorry about that.
Edit: ok, thanks to https://en.wikipedia.org/wiki/Unicode_character_property#Whi... I put in a bunch of zero-width spaces. I hope that fixes it, because I need to sign off now. If it isn't fixed, it would be helpful if someone would email hn@ycombinator.com so we see it in the morning. Short term memory does not survive the firehose.
Happened for me on iOS 13 if it helps. Font stays the normal size but you have to scroll right. If you zoom out, it’s a bit harder to read but not bad.
HN should add the CSS rule "word-wrap:break-word". In fact this rule should almost always be the default for text that might contain excessively long words (such as user-submitted text on forums).
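Concretely, that would look something like the following (the selector is illustrative; HN's actual comment-text class may differ, and `overflow-wrap` is the standard name for which `word-wrap` is the legacy alias):

```css
/* Apply to whatever container holds user-submitted comment text. */
.comment {
  overflow-wrap: break-word; /* break overly long "words" instead of overflowing */
}
```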
The important part is that the algorithm is applied by the system without human interaction after the initial input of preferences. Placements are still final.
I'm confused by the comparison between cost per job and cost per job-year. Wouldn't this (the Wisconsin deal) be the better deal, since the jobs presumably last more than one year?
Your first sentence suggests that you disagreed with my previous post, but your rationale very much supports my point. Spending $125-250K on a job that only exists for one year is markedly worse than paying a comparable amount for a job that lasts many years.
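The per-job vs. per-job-year distinction can be made concrete with made-up numbers (these are illustrative, not figures from the article):

```python
# Same up-front subsidy per job created, different job lifetimes.
subsidy_per_job = 200_000  # dollars (hypothetical)

cost_per_job_year_short = subsidy_per_job / 1   # job lasts 1 year
cost_per_job_year_long  = subsidy_per_job / 10  # job lasts 10 years

# The 10-year job costs $20,000 per job-year vs $200,000 per job-year,
# even though the headline "cost per job" is identical.
```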
> We accounted for patient characteristics, physician characteristics, and hospital fixed effects. Patient characteristics included patient age in 5-year increments (the oldest group was categorized as ≥95 years), sex, race/ethnicity (non-Hispanic white, non-Hispanic black, Hispanic, and other), primary diagnosis (Medicare Severity Diagnosis Related Group), 27 coexisting conditions (determined using the Elixhauser comorbidity index28), median annual household income estimated from residential zip codes (in deciles), an indicator variable for Medicaid coverage, and indicator variables for year. Physician characteristics included physician age in 5-year increments (the oldest group was categorized as ≥70 years), indicator variables for the medical schools from which the physicians graduated, and type of medical training (ie, allopathic vs osteopathic29 training).
They don't say _how_ they controlled for those characteristics though. Presumably, they divided each patient's calculated 30-day risk of death [1, table 10] by the relative ratios in risk of death of every category, calculated from the same data set. That should probably cut it for controlling for this particular difference, although I am not a statistician.
It's a regression model, so "accounted for" usually means that the factors were included in the models.
Ideally, we'd ask a group of male doctors and a group of female doctors to independently diagnose and treat the same set of patients. We can't actually do that: besides the cost, you obviously can't treat the same patient twice. However, we can try to estimate this effect statistically.
First, they build a model describing the probability of a patient dying within 30 days. They used a linear probability model, which essentially means that the probability of someone dying is the sum of the "weights" related to the patient, doctor, and hospital. These weights are estimated from the data using ordinary least squares (the same way you may have learned to fit a line to points at school).
Having built the model, they then ask what the marginal effect of the doctor's sex is. In other words, if you hold everything but that constant, how does the probability of dying change? They cite a nice Stata guide (#32 in the paper here: http://www.stata-journal.com/sjpdf.html?articlenum=st0260 ) which gives some background and examples.
The rest of the paper looks at different variations (only hospitalists, different diseases, etc), using a pretty similar approach.