Hacker News | Vvector's comments

Paper ballots are a must. Vote on a touchscreen, then have the terminal print out a voter-verifiable paper ballot that can also be machine counted.

Make the ballot printout layout a standard format. Then machines from multiple vendors can verify the counts on a subset of the ballots. And as a last resort, the ballots can be hand counted as well.


This is an eVoting system - not at a ballot box. There is no printer. And even if there were, a similar problem could occur if you lose the keys. And you need keys because the printout cannot be voter-verifiable, or you enable the various forms of vote fraud that anonymous ballot boxes were introduced to stop.


Leap seconds need to be abolished. The only people who need them are astronomers, and they could just use an offset. Implementing leap seconds correctly is a huge burden for no gain.

Where I live, high noon today occurs at 1:03 PM. No one is complaining that it is 3 minutes (or 63 minutes) off. It's a non-issue for 99.9% of the population.


Hmm. I understand that perspective, but I'm not sure I agree. It does seem to matter over a relatively short & realistic time scale. According to the Wikipedia page, there have been 27 seconds added since 1972, which is only 44 years ago. At that rate, that's about 1 minute per 100 years. We have many systems that have existed for several centuries and I think it's not unreasonable to start making plans for systems that may exist for millennia, where you're starting to talk about a 10+ minute offset at the current rate.

But I do think there is a valid argument that the infrequency of these events causes more issues than maybe one large adjustment 500 years from now would cause. Not sure where I land on this one.
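A quick back-of-the-envelope check of the rate claimed above, taking the comment's figures (27 leap seconds since 1972) at face value:

```python
# Rough drift of UTC relative to Earth-rotation time, per the comment's figures.
leap_seconds = 27           # added since 1972
years = 44                  # 1972 to the comment's "44 years ago"

rate = leap_seconds / years           # seconds of drift per year
per_century = rate * 100              # ~61 s, i.e. about a minute per 100 years
per_millennium = rate * 1000          # ~614 s, i.e. a 10+ minute offset
```

Which matches the comment's "about 1 minute per 100 years" and "10+ minute offset" over a millennium.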


> since 1972, which is only 44 years ago

Thanks for making me a decade younger :)


Can you explain how a 10 minute offset would affect you in any way?

For 99% of the world today, high noon =/= 12:00:00. Nothing breaks because of this. The world continues to run.


I ran into this trying to view the meridian line at the Basilica of Saint Mary of the Angels in Rome.

I was told the sun would show up on the calendar in the floor at noon. As noon approached, I saw nothing. Then I figured it probably needed to be solar noon, so had to look that up and wait around until that time. Today, that will be 12:20pm.

Nothing would have broken had I missed this, and nothing of critical importance is running on a solar clock (I don’t think), but it still led to a discrepancy in what was expected and where I needed to be when, based on drift from solar noon.


The problem is that Earth's rotation isn't consistently faster. Some years leap seconds need to be added, some years they need to be removed. Would be far better to leave them alone, let them average out, and as the GP said let the people who care about this add the offset they need.


> Some years leap seconds need to be added, some years they need to be removed.

Is that true? Per Wikipedia:

> Since [1972], 27 leap seconds have been added to UTC, with the most recent occurring on December 31, 2016. All have so far been positive leap seconds, adding a second to a UTC day; while a negative leap second is theoretically possible, it has not yet occurred.

Either way, it's due in part to Earth's rotation slowing down, so the average drift would still be non-zero.


We've not had to apply negative leap seconds yet since leap seconds were introduced in 1972, but that wasn't the point.

The rotation period of the Earth fluctuates a lot [0]; in 2020 the day was actually less than 24 hours, but not by enough to warrant a negative leap second. If you go back to the 1940s, we would have needed negative leap seconds if we had had leap seconds at all then, and going back 150 years we would have needed multiple negative leap seconds per year for several consecutive years.

What we can say is that on average the day is close enough to 24 hours, and the average over hundreds of years is closer still, so it's not worth adding these extra seconds only to need to remove them again later on.

[0] https://c.tadst.com/gfx/900x506/graphlength-of-day.png from https://www.timeanddate.com/time/negative-leap-second.html


You make a good argument for the opposite of your conclusion. If you’re planning a system that’s supposed to last for millennia, that system shouldn’t depend on the fiat of the International Earth Rotation and Reference Systems Service.


Let's just do leap minutes. If humanity survives long enough to witness a leap minute without destroying ourselves then that's ample compensation for the minor inconvenience.


Astronomers do not need leap seconds, because even with this adjustment UTC cannot be used to determine anything in astronomy.

Astronomers need two things. For computing the positions of celestial bodies they need true time, which is TAI; and for observations they need the so-called Sidereal Time, which is not a time but the angle between a coordinate system attached to the Earth and an inertial system of coordinates attached to distant celestial objects that have negligible angular movement (in the past those were distant stars; now they are distant galaxies or quasars).

The Sidereal Time can be computed in a complex way from TAI, because it is determined by the periodic rotation and precession of the Earth and by various superposed periodic or random movements.

UTC is not adjusted to match the current true rotation angle of the Earth, which you can measure by looking up at the stars. It is adjusted to match, within 1 second, a fictitious angle: the rotation angle that the Earth-Sun direction would have if the Earth rotated uniformly both around itself and around the Sun, so that the duration of a day were constant.

In reality, the duration of a solar day, i.e. the time between 2 consecutive noons, varies a lot during the year, by a large fraction of an hour (about half an hour peak-to-peak), so using UTC directly to estimate the position of the Sun gives a very big error, of many minutes.

So what you need for astronomy is to know the current TAI and you need a Sidereal Time calculator, which you need for knowing in what direction to point your telescope, to find a given celestial object.

UTC cannot be used directly in astronomy, but only after passing either explicitly or implicitly through TAI. The fact that astronomical almanacs are published using UTC in their tables is obfuscating this, because the values in the tables have not been computed using UTC, but everything has been converted to UTC to match the time that is presumably shown by the watch or clock that the almanac user may have.
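As an illustration of the "Sidereal Time calculator" mentioned above, here is a minimal sketch using the standard USNO low-precision formula for Greenwich Mean Sidereal Time. The input is UT1 expressed as days since the J2000.0 epoch; the small higher-order terms of the full formula are omitted:

```python
def gmst_deg(days_since_j2000: float) -> float:
    """Greenwich Mean Sidereal Time in degrees (USNO low-precision formula).

    `days_since_j2000` is UT1 time expressed as (Julian Date - 2451545.0).
    Good to ~0.1 second of time over a few decades around J2000.
    """
    return (280.46061837 + 360.98564736629 * days_since_j2000) % 360.0

# At the J2000.0 epoch itself the formula reduces to its constant term.
print(gmst_deg(0.0))  # → 280.46061837
```

Note the coefficient 360.98564736629: the sidereal day is about 4 minutes shorter than 24 hours, so the Earth turns slightly more than 360° per solar day.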


Unless you want to abolish timezones entirely, which would simplify clocks but complicate a whole lot else in society, you're going to need leap-something. Would leap minutes or hours really be much better? The idea that doing things less often causes more problems is a reasonable one.


In the 56 years since UTC was established, there have only been 37 leap seconds. At that rate it would take more than 5400 years before it would affect solar noon more than DST does. I'm more than okay kicking the can that far down the road in the name of avoiding all the ridiculous solutions that are needed to accommodate leap seconds. We've endured these headaches to potentially solve a problem for people who might not even still be using UTC.

Compare that to removing the leap day, where the start of seasons would be noticeably affected within just a few decades. Hundreds of years ago, a pretty insignificant headache was invented which is providing constant payoffs.
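The kick-the-can arithmetic above can be reproduced directly, taking the comment's figures (37 seconds of adjustment in 56 years, versus DST's one-hour shift) at face value:

```python
# How long until accumulated leap-second drift matches a DST shift?
adjustment_seconds = 37    # figure from the comment
years = 56
dst_shift = 3600           # DST moves the clock one hour = 3600 s

years_to_match_dst = dst_shift / (adjustment_seconds / years)  # ≈ 5449 years
```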


We need to do "leap hours" anyway--just today they changed to daylight saving time in the U.S.! And time zones are also adjusted every now and then, which also amounts to a one-hour change in the affected regions. Even if we didn't have continuous practice with leap seconds, I think we could definitely include an extra one-hour shift for Earth-rotation reasons along with all the other ones.


They already got to that conclusion, but because of computers.

https://rin.org.uk/news/624222/Leap-Seconds-To-Be-Phased-Out...


That’s super interesting—I didn’t know that.

Before modern standardization, maintaining calendars and clocks was typically the responsibility of states or similar authorities, often guided by astronomers. Now it seems that international organizations are effectively following the early UNIX/POSIX model, and astronomers no longer have the same authority over timekeeping.


> The only people who need it are Astronomers.

And anyone that cares about the relationship of the time of day and the position of the Sun.

Granted, it's not a lot, only a minute per century.


Which means that time changes slowly enough that we don't notice. At some point everybody goes to work half an hour earlier because it makes sense. Schools start earlier, shops open earlier. It doesn't have to be coordinated worldwide. Every region or even town can have its own customs. Then people notice they are in the wrong time zone and a country moves to a different time zone.

Statistically, nobody on Earth knows what UTC is. People know about their local time zone and how it relates to time zones in other countries. Where the position of the sun is relative to UTC, almost nobody knows.


Yet high noon at my current location comes at 12:03 (1:03 with DST). It's three minutes off. If I lived further west in my timezone, noon would come much later.

How can people manage with noon off by minutes, yet want leap-second accuracy every 6 months?
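The size of that offset is easy to estimate: the mean sun covers 15° of longitude per hour, so mean solar noon shifts by 4 minutes for every degree you sit away from your time zone's reference meridian. A sketch (real solar noon also wobbles by further minutes over the year with the equation of time):

```python
def mean_solar_noon_offset_min(longitude_deg: float, utc_offset_hours: float) -> float:
    """Minutes after 12:00 clock time that mean solar noon occurs.

    East longitudes are positive. A positive result means noon comes
    later (you are west of your zone's reference meridian).
    """
    zone_meridian = 15.0 * utc_offset_hours   # e.g. UTC-5 -> 75°W
    return 4.0 * (zone_meridian - longitude_deg)

# Exactly on the zone meridian, mean solar noon is 12:00 sharp.
print(mean_solar_noon_offset_min(-75.0, -5.0))   # → 0.0
# 0.75° west of it: noon arrives 3 minutes late, like the comment's 12:03.
print(mean_solar_noon_offset_min(-75.75, -5.0))  # → 3.0
```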


More like every 2 years.

But yes, point taken.

The counterpoint is that it costs little.


> The counterpoint is that it costs little.

Radical changes to time-related software cost little? Stop press!


What radical software changes are you doing?

I haven't once done a single software upgrade related to leap seconds.


I think the best solution for minimising overall _long-term_ hassle is to switch to using TAI internally and UTC for display.
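A minimal sketch of that design: store timestamps as TAI, keep an append-only leap-second table, and apply the offset only at the display boundary. The table here is abbreviated for illustration; the authoritative one is published by the IERS:

```python
from datetime import datetime, timedelta

# Abbreviated TAI-UTC table: (effective UTC date, TAI minus UTC in seconds).
LEAP_TABLE = [
    (datetime(1972, 1, 1), 10),
    (datetime(2015, 7, 1), 36),
    (datetime(2017, 1, 1), 37),   # most recent leap second: 2016-12-31
]

def tai_minus_utc(when: datetime) -> int:
    """TAI-UTC offset in effect at a given instant."""
    offset = 0
    for effective, secs in LEAP_TABLE:
        if when >= effective:
            offset = secs
    return offset

def tai_to_utc(tai: datetime) -> datetime:
    """Convert an internal TAI timestamp to UTC for display.

    (This sketch ignores the few-second ambiguity right at a leap boundary.)
    """
    return tai - timedelta(seconds=tai_minus_utc(tai))
```

Internal arithmetic on TAI timestamps then stays simple (every minute has 60 seconds); only the display layer ever consults the table.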


I mean, in 53 years we have added 27 leap seconds, so in 119 years you'll have to set your alarm a minute earlier if you still want to arrive on time.



Are you suggesting that I cannot refuse to hire a bookkeeper that has multiple convictions for embezzlement?


s/cloudflare/coinbase/


Gah! Sorry, complete slip of the fingers. No offense to Cloudflare intended!


Fixed now!


Thank you dang!


One day while driving, I received a call from a technical recruiter at Stripe. I told them about how much I admired their developer first approach, the Atlas program for startups, etc. Later that day, I looked up the recruiter on LinkedIn and realized they worked at Square, not Stripe!


I do this all the time with Shopify / Spotify. The number of times non-tech friends have had to ask what Shopify is when discussing music and I slip up :/


I have the same problem with Oracle / Lawnmower.


That's odd


Is a pirated movie, found on bittorrent, public?

IMO, your definition is overbroad


If it's on bittorrent then, yes, it's public. It doesn't matter if you intended it to be or not, it's publicly accessible, therefore it's public.


Does an increased pixel count make a bad movie better?


Does a decreased pixel count make a good movie better?


If the local market for American DBAs is $180k, then hiring H1B DBAs at $110k does depress wages.


Sure, but if the local market is that high you probably have severe supply constraints.

If you don't fix the supply constraints, you'll depress growth.

You could fix the education system - good luck - and then wait 5 years before you cut H1B.

But yes, obviously it depresses wages, which at a certain point is probably a good thing.


If it is a "free tier", Amazon should halt the application when it exceeds quota. Moving the account to a paid tier and charging $100k is not the right thing to do.


Yes. They said it was free then they surprise charge you $100k.

That’s an insane amount of both money and stress. You’re at Amazon’s mercy if they will or will not refund it. And while this is in process you’re wondering if your entire financial future is ruined.


I have never in 8 years of being in the AWS ecosystem and reading forums and Reddits on the internet had anyone report that AWS wouldn’t refund their money.

If you go over your budget with AWS, what should AWS do automatically? Delete your objects from S3? Terminate your databases and EC2 instances? Besides, billing data collection doesn’t happen anywhere near realtime, consider it a fire hose of streaming data that is captured asynchronously.


> If you go over your budget with AWS, what should AWS do automatically? Delete your objects from S3? Terminate your databases and EC2 instances?

Why not simply take the service offline once it reaches the free tier limit?

The reason why is that AWS is greedy, and would rather force you to become a paid customer…


How do you take your S3 service offline when they charge for storage or your EBS volumes? Your databases?


Block access to the service until the next billing period starts, or the user upgrades to a paid tier.


And it is still incurring charges for storage costs.


At Amazon scale, including a "we don't delete the data for 30 days if a bill isn't paid" clause is a plausible thing to include in the "free" tier. Paid tiers owe Amazon the contracted rate for the storage, as with any similar contract, and when Amazon deletes the data if payment isn't rendered when due is up to the terms of the contract.


There is no such thing as the “free tier”, at least not until July of this year. Some services are free for the first year up to a certain limit, some give you a bucket of free usage every month, etc.


Then you owe the contracted rate for the storage. These massive bills are almost never for storage, they're almost always for some sort of compute or transport left unrestricted. If you store 500TB you'll get an $11k/month bill, but the vast majority of the services can simply cut off usage at a limit. Even storage could prevent adding new data if you hit a pre-specified limit, so you'd only pay for the data you already had.

If I know my service should never use more than 1TB total I'd like to be able to set a limit at (say) 2TB total with warnings at 0.6TB & 1TB, thus limiting spend to $46/month on storage. Sure, my service will fail if I hit the limit, but if it's using double the storage I expect it to use something went wrong & I want to require manual action to resolve it instead of allowing it to leak storage unbounded.

This is not a particularly difficult problem to make significant improvements on. There are some edge cases (there always are) but even if spending limits were only implemented for non-storage services it'd still be better for customers than the status quo.
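A hypothetical client-side guard along those lines (this is not an existing AWS feature; all names here are invented for illustration):

```python
class StorageQuota:
    """Refuse writes past a hard byte cap; surface warnings at soft thresholds."""

    def __init__(self, hard_cap: int, warn_at: list[int]):
        self.hard_cap = hard_cap
        self.warn_at = sorted(warn_at)
        self.used = 0

    def try_put(self, nbytes: int) -> list[str]:
        """Record a write of `nbytes`, or raise if it would exceed the cap.

        Returns any warning messages whose thresholds this write crossed.
        """
        if self.used + nbytes > self.hard_cap:
            raise RuntimeError("hard storage cap reached; manual action required")
        before, self.used = self.used, self.used + nbytes
        return [f"warning: passed {t} bytes" for t in self.warn_at
                if before < t <= self.used]

TB = 10**12
quota = StorageQuota(hard_cap=2 * TB, warn_at=[int(0.6 * TB), 1 * TB])
```

This mirrors the scenario above: a 2 TB hard cap with warnings at 0.6 TB and 1 TB, and writes failing loudly once the cap is hit rather than leaking storage unbounded.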


Provide the user the tools to make these choices. Give the option to explicitly choose how durable to extreme traffic you want to be. Have the free tier default to "not very durable"


Bam, you said it. They'd do it if they cared, but they don't and prefer the status quo. A $100k surprise bill is the type of thing people kill themselves over. Horrific


You mean like having a billing alert send an event that allows you to trigger custom actions to turn things off? That already exists. It has for years.
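For reference, the existing mechanism looks roughly like this: a CloudWatch alarm on the AWS/Billing EstimatedCharges metric (published in us-east-1, and only once billing alerts are enabled on the account), with an SNS topic whose subscriber can then shut resources down. The account ID and topic ARN below are placeholders:

```shell
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name billing-over-100-usd \
  --namespace AWS/Billing \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts
```

Note this is reactive, not a hard cap: EstimatedCharges updates only every few hours, so spend can overshoot the threshold before the alarm fires.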


So why isn't it the default yet? Why isn't unlimited scaling something you have to turn on?


Because how you personally decide to handle cost overruns is up to you. AWS by itself can’t make that decision for you.

The opposite problem when you do set low limits by default is that you constantly have to submit tickets to AWS to ask for service limit increases.

How is AWS supposed to know whether you want to immediately scale or not?

And before July of this year, there was no such thing as a “free tier AWS account”. There were services that allowed certain amount free.


> How is AWS suppose to know whether you want to immediately scale or not?

Ask? This is not some impossible problem.

Yes, there is a UX challenge to be solved.

But also, doing so is well within the capabilities of a company like Amazon.

They simply have no incentive to help out since there is less money to be made by making it easier to spend less money. And, purely capitalistically, if you have to pick between a potential bug or misconfiguration that causes extra spending you can walk back with customer support, and a bug or misconfiguration that results in extra downtime for your 7+ figure customers, you pick the latter.

Their choice makes sense for their bottom line.

It's still bad UX for many users.


And AWS is supposed to do that across all 230+ services?

But as of July 15th of this year, there is actually a “free tier” that won’t let you spend over $200.

Before there were services with a free tier.


I agree, but I could also see how someone would complain about that: “Our e-commerce site was taken down by Amazon right on our biggest day of the year. They should have just moved us up to the next tier.”


Seems like the most flexible option is to put a spending limit in place by default and make it obvious that it can affect availability of the service if the limit is reached.

My credit cards have credit limits, so it makes sense that a variable cost service should easily be able to support a spending limit too.


Then let that be the non default option.


The default option is always going to be the one that makes the majority of Amazon's paying customers happy.


Maybe offer 'Sales day rush auto-scale' as a setting.


That would get caught during the pre-peak stress testing.

You do do stress testing ahead of peak season, right?


Good news! This is exactly how the free tier works now.


You're misunderstanding the offering. (Maybe that's their fault for using intentionally misleading language... but using that language in this way is pretty common nowadays, so this is important to understand.)

For a postpaid service with usage-based billing, there are no separate "free" and "paid" plans (= what you're clearly thinking of when you're saying "tiers" here.)

The "free tier" of these services, is a set of per-usage-SKU monthly usage credit bonuses, that are set up in such a way that if you are using reasonable "just testing" amounts of resources, your bill for the month will be credited down to $0.

And yes, this does mean that even when you're paying for some AWS services, you're still benefitting from the "free tier" for any service whose usage isn't exceeding those free-tier limits. That's why it's a [per-SKU usage] tier, rather than a "plan."

If you're familiar with electricity providers telling you that you're about to hit a "step-up rate" for your electricity usage for the month — that's exactly the same type of usage tier system. Except theirs goes [cheap usage] -> [expensive usage], whereas IaaS providers' tiers go [free usage] -> [costed usage].
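The per-SKU credit model described above can be sketched in a few lines; the SKU names, rates, and allowances are invented for illustration:

```python
# Hypothetical per-SKU rates ($/unit) and monthly free-tier allowances (units).
RATES     = {"requests": 0.0000002, "gb_hours": 0.01}
FREE_TIER = {"requests": 1_000_000, "gb_hours": 25}

def monthly_bill(usage: dict[str, float]) -> float:
    """Bill only usage above each SKU's free allowance, per SKU.

    Light "just testing" usage nets to $0; paying for one SKU doesn't
    forfeit the free allowance on any other SKU.
    """
    return sum(max(0.0, used - FREE_TIER.get(sku, 0.0)) * RATES[sku]
               for sku, used in usage.items())

bill_a = monthly_bill({"requests": 500_000, "gb_hours": 10})    # fully credited: 0.0
bill_b = monthly_bill({"requests": 3_000_000, "gb_hours": 25})  # ≈ $0.40 for 2M excess requests
```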

> Amazon should halt the application when it exceeds quota.

There is no easy way to do this in a distributed system (which is why IaaS services don't even try; and why their billing dashboards are always these weird detached things that surface billing only in monthly statements and coarse-grained charts, with no visibility into the raw usage numbers.)

There's a lot of inherent complexity of converting "usage" into "billable usage." It involves not just muxing usage credit-spend together, but also classifying spend from each system into a SKU [where the appropriate bucket for the same usage can change over time]; and then a lot of lookups into various control-plane systems to figure out whether any bounded or continuous discounts and credits should be applied to each SKU.

And that means that this conversion process can't happen in the services themselves. It needs to be a separate process pushed out to some specific billing system.

Usually, this means that the services that generate billable usage are just asynchronously pushing out "usage-credit spend events" into something like a log or message queue; and then a billing system is, asynchronously, sucking these up and crunching through them to emit/checkpoint "SKU billing events" against an invoice object tied to a billing account.

Due to all of the extra steps involved in this pipeline, the cumulative usage that an IaaS knows about for a given billing account (i.e. can fire a webhook when one of those billing events hits an MQ topic) might be something like 5 minutes out-of-date of the actual incoming usage-credit-spend.

Which means that, by the time any "trigger" to shut down your application because it exceeded a "quota" went through, your application would have already spent 5 minutes more of credits.

And again, for a large, heavily-loaded application — the kind these services are designed around — that extra five minutes of usage could correspond to millions of dollars of extra spend.

Which is, obviously, unacceptable from a customer perspective. No customer would accept a "quota system" that says you're in a free plan, yet charges you, because you accrued an extra 5 minutes of usage beyond the free plan's limits before the quota could "kick in."

But nor would the IaaS itself just be willing to eat that bill for the actual underlying costs of serving that extra 5 minutes of traffic, because that traffic could very well have an underlying cost of "millions of dollars."

So instead they just say "no, we won't implement a data-plane billable-usage-quota feature; if you want it, you can either implement it yourself [since your L7 app can observe its usage 'live' much better than our infra can] or, more idiomatically to our infra, you can ensure that any development project is configured with appropriate sandboxing + other protections to never get into a situation where any resource could exceed its free-tier-credited usage in the first place."


Oracle can do it.


Yes and no. Yes, if we're just specifically talking about the ability to support a free trial that will never bill you (i.e. what the OP was talking about); but no, if we're talking about the more-general ability to set spending limits and never be billed for overage (what this subthread drifted into discussing.)

Oracle Cloud has a 30-day free trial; and that free trial seems to have had some dedicated effort put into a whole divergent billing-infra path for it.

Under Oracle Cloud's free trial, you get a certain amount of spend ($300 in credits); and then, when your trial either expires (30 days) or you run that credit pool down to zero, your account is shut off.

Oracle do eat any marginal costs from your spend taking your credits "below zero" before they shut the account off, because your account was never billing to you anyway; it was billing to Oracle's marketing department as a lead-gen expense.

In other words, unlike Oracle Cloud's steady-state IaaS offering, their free-trial IaaS offering is actually a prepaid (but usage-billed) paradigm — with Oracle being the ones doing the pre-payment.

This works much like an oldschool prepaid phone plan, where you pay in every month to be given a certain number of [expiring/non-"rollover"] minutes/texts/MB of data; and then you get an itemized invoice at the end of the month for how close you came to "using up" each resource that month. And you very well can use up a resource's monthly paid allocation before the end of the month — e.g. "running out of texts" and being unable to send more, rather than those converting into something billed to you. (In a prepaid context, that "converting into being billed" is called "flex" or "pay-as-you-go" [PAYG] billing, and is usually some extra option you would have to enable, if offered at all.)

At scale, prepaid usage-billed systems are also asynchronous; to continue the telecom analogy, most phone-service providers won't re-aggregate your prepaid calling minutes to notice you've run out, until you hang up your current call. Only rarely do they have infra where the billing system can ping the telecom switches' control planes to say "hey, this guy just went over, hang up the call" — and when they do, they only do such checks on a 5-minute/30-minute interval, probably as a scheduled batch query.

But, yes, prepaid systems almost always do just eat any overage generated by this detection gap. This is usually safe, because prepaid systems are almost never elastic to the point that you could accrue nontrivial expenses during that short accounting gap.

When a system is that elastic, a systems architect responds by saying "this should be a postpaid system."

Which means that Oracle Cloud's free trial — insofar as it allows you to make use of truly-elastic resources with per-credit upstream basis costs, like FaaS compute — is probably vulnerable/exploitable. Oracle may sometimes be eating some hefty bills, where people on a free trial have wired their FaaS into a proxy fronting some already-highly-popular service.

This is mostly fine, if you have Oracle's treasury, because you'll still be doing KYC in advance of giving out these trials, so you'll only be letting any given individual do one trial.

But this does put Oracle in the territory of "having to think about people who buy burner identities on the black market [usually for ~$1] to sign up for services using them" + "having to think about people who sign up for their free trial and then sell that free-trial account's credentials on the black market [again, usually for ~$1]."

I haven't checked myself, but I would guess that like any other provider who sees this type of attack (e.g. Hetzner), Oracle Cloud likely has hardened registration flows that reject identities + cards from certain parts of the world; traffic fingerprinting heuristics that immediately shut down free trials if they start up a DDoS attack or the like; etc.

Which is something the other clouds get to skip thinking about entirely, by not having a true "free trial" with a prepaid model, and instead just offering e.g. a one-time $300 sign-up-bonus account credit.

---

But remember, we're only talking about the "free trial" here — something you only get access to for the first 30 days.

Oracle's free tier — the thing you have after the first 30 days — is no different than the one every other IaaS offers. It needs a billing account populated by your credit card; there's infrastructure to allow you to automate control-plane actions in response to billing thresholds being hit, but no offering that will wire anything up for you; etc.

In Oracle Cloud's free tier, you can set budget limits that will prevent new costed resources from being leased while your account is over that limit in a given month (which is certainly nice) — but those budget limits don't affect ongoing usage-based-billing of a resource. Your FaaS endpoints will continue to accrue vCPU-seconds of billed usage, until you — or some automation you wrote — shuts them off.


stop putting stuff on the internet you don't understand.


When I google ANATEL, it comes up as Brazil

