
This could also cause informational dissonance for users. You are potentially reading the "Firefox version" of the site while your NotebookLM is chomping away on the "AI version", and they can be wildly different. And you won't even know, because you never see the "source" of the "AI version". What are we gonna do, upload everything ourselves, manually?


How about giving humans the ability to read the AI version? In my browser I can already select different page styles (e.g. viewing the print version), so this doesn't seem too far-fetched.


> This is no different than the decades-old technique of "cloaking", to fool crawlers from Google and other search engines.

Isn't this more "Hey, why is this website giving my NotebookLM different info than my own browser?" You're reading Page_1 and the machine is "reading" a different Page_2; what's the difference between those two sets of information?

I'm reading this less as

> "We serve different data to Google when they are crawling and users who actually visit the page"

and more

> "We serve the user different data if they access the page through AI (NotebookLM in this case) vs. when they visit the page in their browser".

The former just affects page rankings, which primarily interfaced with the user through keywords and search terms: you could hijack the search terms and related words that Google associated with your page and make it rank better in searches (i.e. SEO).

The latter, though, is providing different content based on access method. That sort of situation isn't new (you could serve different content to Windows vs. Mac, Firefox vs. Chrome, etc.), but it's done in a way that feels a little more sinister: I get two different sets of information, and I'm not even sure that I did, because the AI information is obfuscated by the AI processes. I guess I could make a browser plugin to download the page as I see it and upload it to NotebookLM, subverting its normal retrieval process of reaching out to the internet itself.


> and because it results in people asking to talk to me instead.


> Does anyone have any other practices they can recommend for managing these type of projects?

Honestly, the only way around these sorts of issues is to utilize automation in some form.

I've found that setting up repositories (like devpi[0], Artifactory[1], or Docker Registry[2]) on a shared network location (for your project; it could be local if you work alone) and using CI/CD tools (like Jenkins[3]) are key. The goal is that you end up working on one portion of the code base at a time, and changes need to go through the standard validation processes so that you can pull in the updated package version when you work on something downstream. Making sure that the CI/CD environment _doesn't_ have access to other packages' non-versioned code is key to making sure things actually work as expected.

For example, say you have FooLib and you need an update in it for BarApp. Even if you branch FooLib from 1.2.3 to 1.2.3-1-gabc1234d (the `git describe` of the commit) on `feat/new-thingy`, and even if BarApp v2.3.4-1-gaf901234 depends on that new branch, the CI/CD build process shouldn't in any way be able to reference that branch. How do you get around this? Good development: finish the FooLib branch, get it working, merge it in with the updated version, and push the package (with the new version) to the CI/CD-accessible repository. At that point, when you push your BarApp change, it can actually build and not die. But until FooLib has got a versioned update, BarApp's branch _shouldn't_ be able to build.
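As a toy sketch of that rule (FooLib, BarApp, and the version strings are the hypothetical names from this example, not a real package index), a CI-visible repository that only knows released versions will reject a branch build:

```python
# Toy model of the rule above: the CI-accessible repository only contains
# released versions, so a `git describe`-style branch build is invisible
# to it. FooLib/BarApp and the versions are hypothetical examples.

RELEASED = {"FooLib": {"1.2.3", "1.2.4"}}  # what CI/CD is allowed to see

def resolve(package: str, required: str) -> str:
    """Return a pin only if the required version was actually released."""
    if required in RELEASED.get(package, set()):
        return f"{package}=={required}"
    raise LookupError(f"{package} {required} is not in the release index")

print(resolve("FooLib", "1.2.4"))           # released: BarApp's build succeeds
try:
    resolve("FooLib", "1.2.3-1-gabc1234d")  # branch build: CI build dies here
except LookupError as exc:
    print("build fails:", exc)
```

The point of the sketch is only the asymmetry: local tooling can see the branch, but the release index that CI resolves against cannot.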

The statement of "But I want to work on the changes locally, in parallel" is valid. That's what local development is for: giving you space to work on related things that don't impact the upstream codebase. You should have the option to utilize FooLib's branch code in your BarApp code locally, and you can often do that via things like `pip install` or `mvn install` or whatever the relevant local install command is. At this point, the package probably still has the same version number, so the local build doesn't trigger issues. You can work on the two and tweak and twist as you want, but refrain from pushing BarApp referencing FooLib's branch until that change is actually in the repo.

This all takes a great deal of restraint and patience. The goal here is to make it just a tad harder to introduce problems somewhere, since you can't depend on something that hasn't been given the go-ahead. While there might be a lot of "Updated FooLib requirement to v1.2.4" commits throughout your codebase, why would you do that just off-hand? If you are doing it because of a security issue or bug, let that be known in the commit message. If you are doing it because you can utilize a new feature/whatever, your commit message won't be just "Updated FooLib"; it will likely be "Added Feature X2Y, updated FooLib to 1.2.4".

PHP I try not to touch much, simply because I've always had bad experiences. I know for a fact that there are decent ways to do it with build tools like Maven[4], setuptools[5], and Docker[6]. Hell, I have used Docker as a way to introduce versioned dependency packaging, only needing to use Docker Registry (each dependent project does a multi-stage build, pulling in the dependencies via the versioned package images).

---

[0]: https://devpi.net/docs/devpi/devpi/latest/%2Bd/index.html

[1]: https://jfrog.com/artifactory/

[2]: https://docs.docker.com/registry/

[3]: https://www.jenkins.io/

[4]: https://maven.apache.org/

[5]: https://setuptools.readthedocs.io/en/latest/

[6]: https://www.docker.com/


> But until FooLib has got a versioned update, BarApp's branch _shouldn't_ be able to build.

This is such a horrible practice. You're creating mountains of extra work, and encouraging devs to delay integration testing, which is certain to lead to cycles of rework. It also only 'works' on toy features. When you're building a complex feature that requires a few weeks of work and a few devs, it quickly breaks down, and further prevents early QA testing of the new feature itself.

Unfortunately this practice is often forced on people by reliance on the horrid SemVer scheme, which only makes any kind of sense for 3rd party dependencies, but is foisted on internal dependencies as well by many idiotic package managers, like Go mod or NPM.


I find that this pain is a symptom of complecting. If you well and truly can't test your code to 80% or better confidence until the other feature comes online, well then maybe there is insufficient separation of concerns.

Typical CRUD stuff should be like at least 80% purely functional business logic (that is 100% testable without integration) and 20% or less IO code. If you really need that integration to find all the rough edges and work out the bugs, you probably have too much surface area in your IO "tainted" code.
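A minimal sketch of that split (the discount rule and all the names here are hypothetical, just to illustrate the shape): the business logic is a pure function you can test exhaustively without integration, and the IO "tainted" part is a thin shell around it.

```python
# Hypothetical example of the 80/20 split described above: the business
# rule is pure (trivially testable in isolation), and the IO layer only
# fetches inputs and writes outputs.

def discounted_total(prices: list[float], loyalty_years: int) -> float:
    """Pure business rule: 5% off per loyalty year, capped at 25%."""
    discount = min(0.05 * loyalty_years, 0.25)
    return round(sum(prices) * (1 - discount), 2)

def handle_checkout(fetch_cart, fetch_customer, write_receipt) -> None:
    """Thin IO shell: all the decision-making lives in the pure function."""
    cart = fetch_cart()
    customer = fetch_customer()
    write_receipt(discounted_total(cart["prices"], customer["loyalty_years"]))

print(discounted_total([10.0, 20.0], 2))  # → 27.0 (10% off 30.0)
```

Because the IO shell contains no logic, there is very little surface area left that only integration testing can reach.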

Java-style OOP really encourages this sort of thing by quietly conflating data state with functional methods. The whole "I needed a banana and you gave me a whole jungle" problem.


Integration vs. component testing has almost nothing to do with functional vs. state/IO-heavy work. Instead, it has everything to do with the amount of effort you spend in specifying your components and writing test cases. If we're building a feature where componentA must call componentB with some data structure to achieve some goal, we can formally specify all valid inputs of componentB and their semantics, write tests for each combination, and then have componentB religiously stick to that spec; or we can agree in more informal terms on the expected input values and rely on integration testing to put them together and make sure we achieve the right result for the expected inputs to componentA.

For some problems, the formal specification is tractable and even necessary. But for many complex problems, it is either not tractable (the input is too complex; you would need something on the order of magnitude of componentB itself to actually specify the semantics) or it's just not worth it (componentB is only called by componentA).

I also want to note that I'm not talking about regular types when I say 'formally specifying the valid inputs and their semantics', though I'm sure dependent types could in principle achieve this. I'm talking about cases like components which communicate through script-like objects or configuration templates etc.


If you have a CI system, your patch version will always increase, and you can then always integrate the latest dev version. Most of your developer issues will be someone not using the latest versions for everything.


How will that work with in development branches? At any one time, there are multiple sub-teams developing multiple independent sub-features all impacting some of the same components. How are they supposed to do this if they can't branch out the components and each work on their own independent branch of the integrated application? There isn't a single 'latest dev version', there are many.


In lerna you can restrict versioning to a particular branch. So you check out your feature branch, work until it's ready to share and then merge it back into master and create a version.

Creating a version tags the commit with the version number for each package that's been updated, and it allows for the creation of pre-release versions if you have things that aren't ready for prime time.

Consumers can depend on a particular git commit by referencing the tag.

So there is one main branch that contains all of the commits, but different components are versioned independently and reference particular commits in the branch.


Integrate often enough that nothing has diverged enough for that to be a problem. Short lived branches are good, long lived branches get into the problem you speak of.


We're back to things that only work for small changes. Large features that need days or weeks of work before they can be mainlined aren't a rare occurrence, they are the norm, and usually generate the most value for a product.

Not to mention, you often need to polish a release while developing large features for the next release - again cases where you need branches.

Of course, you can also try to take the feature flag model, and avoid refactoring entirely. Unlikely to be a good strategy for a long lived product.


I agree. Having a huge monorepo is basically throwing in the towel on your automation/package management/dependency validation.

In the .net world your CI/CD pipeline should continuously build and publish NuGet packages of your common code as you make changes. Since the old versions are obviously still available, other parts of the system are not forced to be updated to the new version of the dependency.


Thanks for those links! I will check that out.

I will say one problem I have is in refactoring the interfaces of my modules, which is what I seem to spend a lot of time on, at least at this stage of the project. When I am updating the bottom ones, I pretty much have to update the others in parallel.


Yeah, that's understandable. Don't be afraid to have refactored changes on other repos locally, just be sure to do the package version updates first.


I know, right? I mean, I paid my rent on time for the entire pandemic due to having a job. All those people who have back rent and possibly lost their jobs and only now are getting them back should also have to meet the same financial burden!

Or, ya know, accept that people need a place to stay and recognize that the pandemic was a global disaster that many places were unable to handle, and kicking people out now might be a bigger second order problem.


Why should landlords maintain their profit margins over this period while tenants who lost their jobs lose all their savings and disposable income?

Landlords, even if, as a business, they don't have an 18 month cash buffer, have equity. If a landlord is hard-up they can attempt to refinance, or go bust and sell their assets (the property) to another more successful (perhaps less levered up) landlord. Rental yields are something like 3-5%, so 18 months of lost income is only an additional 5-8% LTV.
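As a rough back-of-the-envelope check of those numbers (the 3-5% gross rental yield is the assumption stated above):

```python
# Back-of-the-envelope check of the claim above, using the comment's
# assumed 3-5% gross rental yield on a property normalized to value 1.0.
for annual_yield in (0.03, 0.05):
    lost_rent = 1.5 * annual_yield  # 18 months of missed rent
    print(f"yield {annual_yield:.0%}: ~{lost_rent:.1%} of property value lost")
# i.e. roughly 4.5% to 7.5%, matching the "additional 5-8% LTV" cited above
```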

The state should have decreed that anyone who can show hardship (tenant or landlord) be granted interest-free payment holidays for the duration, and made it illegal for leveraged landlords not to pass these on.


> Why should landlords maintain their profit margins over this period while tenants who lost their jobs lose all their savings and disposable income?

They shouldn't, but the fifth and fourteenth amendments exist, and retroactively voiding the debt (including forced settlement at a reduced, state-judged-as-fair rate) looks a lot like either a taking or a deprivation of property without substantive due process. So it is, at best, a magnet for extended and uncertain litigation if anyone isn't happy with it, and someone won't be. Full-value settlement is the fastest way to resolve the adverse impacts of the crisis on renters and landlords, preventing a wave of evictions and/or a wave of ruin for innocent landlords. Is it perfectly just? No. But it's almost certainly the resolution that deals with the immediate problem in a way which has the least risk of being derailed by litigation.


Don't those amendments apply to individuals and not businesses?

Also, the state already entered the game when evictions were suspended. If the choice from your lender, as a landlord with hard-up tenants, is bankruptcy versus having your mortgage term extended by 18 months with an 18-month interest-free payment holiday now, which would you choose?


> Don't those amendments apply to individuals and not businesses?

(1) Landlords are often individuals, and (2) in any case, no, they apply to legal persons generally.

> Also, the state already entered the game when evictions were suspended

There's a reason for different tolerance for litigation risk in those two actions, but in any case by providing essentially full-value compensation for the losses due to the earlier action, this action is not only on firmer ground viewed individually, but also greatly mitigates remaining litigation risk from the earlier action.


So just make them pay it off later on or in small chunks monthly until it's paid off. There's no reason whatsoever for making other people pay for it for them. I'm tired of other people being so damn generous with my money.


The only time in which you would need blockchain over a centralized DB is when everyone needs access to the DB, but no one trusts each other. Like if you needed to store healthcare information for multiple people which theoretically could be accessed by multiple competing hospitals or something. It's a lot of overhead to legitimize not just using a centralized DB for most applications.


That use case is highly suspect. The reason sharing medical records is difficult is because medical records are sensitive information. Blockchain technology does not solve this problem, and in fact makes it worse. By the very nature of blockchain every single hospital would now need to store all of the medical information of everybody in the country. That means if a single hospital is compromised (which is inevitable) the medical history of every single person in the country will be leaked. At that point, you might as well just place everybody's medical records online for all to see, the end result will be very similar.

Hospitals could encrypt this information inside the blockchain, but then they would need to contact each other for the keys, which defeats the entire point of the blockchain. At that point they might as well contact each other for the data, after all.

Using blockchain technology also means you cannot abide by the GDPR's right to be forgotten. That means storing any personal information in a blockchain is a legal liability, at least in the EU, which limits its potential use cases even further.


> Using blockchain technology also means you cannot abide by the GDPR's right to be forgotten. That means storing any personal information in a blockchain is a legal liability, at least in the EU, which limits its potential use cases even further.

Yes indeed, blockchains aren't suitable for storing personal data. No public decentralised network is. IPFS wouldn't be either for instance. By nature such networks assume no one peer can be trusted. Not the sort of thing you want to trust with confidential information.

This openness is often sold as a feature, for example running your supply chain through a blockchain means there is a publicly auditable ledger of every step, so a company claiming to have an ethical supply chain could theoretically point to that blockchain as proof. This is potentially interesting imo.

Most enterprise blockchain solutions I've seen are hybrid ones. A business can store personal data in a regular database and keep non-sensitive data on a blockchain (as per the example above). The advantage of this (outside of the above use case) is that you can run SQL queries on large data sets much faster against a regular database than you could against a distributed ledger.

(Note: this is what the companies selling those solutions claim. I don't have first hand experience with blockchains in an enterprise context so I couldn't tell you if this is true - I imagine it'd depend on the database and blockchain in question.)

Final point though, these aren't necessarily proprietary blockchain solutions as such. They're more like SDKs businesses can use to build their own blockchains, code their own smart contracts, etc.

Whether this is actually useful for enterprise... I honestly couldn't tell you. I do think the supply chain stuff is interesting. I also think using it as a system to detect counterfeit items is another good use case for businesses. In the past, companies have created apps where you can scan a QR code and it confirms the legitimacy of the product, but counterfeiters just made QR codes that tricked the app. If each unit is tracked on a public blockchain, it should be possible to verify legitimacy with near 100% certainty.


> so a company claiming to have an ethical supply chain could theoretically point to that blockchain as proof.

No. A company can point at an entry that claims to be from an ethical supply chain. It's not proof that that entry actually represents reality.

Same for every other entry in the supply chain.

> I do think the supply chain stuff is interesting.

It's not. For the reason above.

> If each unit is tracked on a public blockchain, it should be possible to verify legitimacy with near 100% certainty.

Because a publicly available hash on a publicly available blockchain is different from a QR code and cannot be spoofed... how?


> Because a publicly available hash on a publicly available blockchain is different from a QR code and cannot be spoofed... how?

It can not be spoofed due to having been signed by a verified key (a signature could of course be encoded in a QR code as well!)

It can not be redacted, or retroactively inserted at a later point in time. The link/hash can of course also be encoded in a QR code.


> It can not be spoofed due to having been signed by a verified key

Let's say I have a Rolex watch. This "item" has an entry in a publicly available blockchain available to everyone. Who's to stop anyone from producing a "Rolex" watch pointing to the exact same entry on the blockchain?

> The link/hash can of course also be encoded in a QR code.

Indeed. So how exactly does blockchain protect against counterfeit goods?


Whenever the key is checked, the software that checks the key creates an entry in the blockchain.

Using your example of a Rolex, the code can be scanned by the authorised dealer and buyer. Those events are then stored in the blockchain next to the cryptographic hashes of both entities.

Any authorised dealer who buys one for resale would scan the QR code so ownership can be transferred in the same way on a public ledger.

If there's a public record that this Rolex has been purchased already and you scan it, this record would show up. It could even show exactly where and when it was purchased.

Clearly, for someone to put a real cryptographic key on a fake Rolex, they need to have taken it from a real one.

So if someone tries to sell you a "Rolex" and you scan it, you'll have the history of the watch right there. If they try to claim it's new, you'll know that's a lie. If they try to sell it to an authorised dealer, they'll get caught.

This could still leave space for fake Rolexes to be sold as used on eBay or something of course, but then if you buy a "Rolex" on eBay from a random seller (not an AD) you kind of know what you're getting already don't you?

(Although even in those situations, knowing exactly when and where the real watch was last purchased makes it easy to just make a phone call and get a better idea of legitimacy. Currently, even ADs send the watches to Rolex for verification because the fakes are so good.)
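A toy sketch of that scan history (pure stdlib, in-memory; a stand-in for a real blockchain, with made-up serial numbers and parties): each scan appends a hash-chained record, so a counterfeit pointing at a real serial shows up with a prior history, and altering an old record is detectable.

```python
import hashlib
import json

def chain_hash(record: dict, prev_digest: str) -> str:
    """Hash a record together with the previous digest, forming a chain."""
    payload = prev_digest + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    """Toy append-only hash chain; an in-memory stand-in for a blockchain."""
    def __init__(self):
        self.entries = []  # list of (record, digest) pairs

    def append(self, record: dict) -> None:
        prev = self.entries[-1][1] if self.entries else ""
        self.entries.append((record, chain_hash(record, prev)))

    def history(self, serial: str) -> list:
        return [rec for rec, _ in self.entries if rec["serial"] == serial]

    def verify(self) -> bool:
        prev = ""
        for record, digest in self.entries:
            if chain_hash(record, prev) != digest:
                return False
            prev = digest
        return True

ledger = Ledger()
ledger.append({"serial": "RLX-001", "event": "sold", "by": "AuthorizedDealer"})
ledger.append({"serial": "RLX-001", "event": "transfer", "to": "alice"})

# A counterfeit pointing at serial RLX-001 cannot claim to be new:
print(len(ledger.history("RLX-001")))  # 2 prior events on record
print(ledger.verify())                 # True until any record is altered
```

This only models the tamper-evidence and history parts of the argument; the unresolved question in the thread (whether the physical item matches its ledger entry) is exactly what the chain itself cannot answer.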


Your example seems to be mostly coordinated through Rolex and authorized dealers, so what is the advantage of a blockchain? Why not just have Rolex keep track of who owns a watch through e.g. a web interface tied to a serial number + username/password combination, that also allows transferring of the ownership of a watch to a different user? This would be much more user friendly, since you could have features such as "forgot my password" which are (by design) impossible to implement on top of a blockchain.


> Whenever the key is checked, the software that checks the key creates an entry in the blockchain.

Right. So on top of a blockchain there's some software that inputs something on the blockchain.

What's to stop me from creating software that won't create those events, but will still check the key?

> Any authorised dealer

> so ownership can be transferred in the same way

Curiouser and curiouser. So now there are centralised dealers that can transfer ownership, and only a select few can create events on the great decentralised blockchain. Tangential question: if I want to give the watch as a gift, do I have to have Rolex's blessed authorised software to do that?

Also, if "authorised dealers" have the power to do this, it means they have the cryptographic keys. This also means that the rest of the world has them.

> So if someone tries to sell you a "Rolex" and you scan it, you'll have the history of the watch right there.

Indeed. So the counterfeit watch comes up with a real history. Rolex produces almost a million watches a year; it will be ridiculously easy to pick up the serial numbers of new watches for the counterfeits.

Those that are not "new" can be sold at second hand markets.

> but then if you buy a "Rolex" on eBay from a random seller (not an AD) you kind of know what you're getting already don't you?

Ah. And here it is: "blockchain can help verify authenticity with near 100% certainty" devolves into "you know what you're getting into" in the span of three comments.


> Who's to stop anyone from producing a "Rolex" watch pointing to the exact same entry on the blockchain?

No one. But only one is signed by Rolex's keys, and therefore considered legitimate.

> Indeed. So how exactly does blockchain protect against counterfeit goods?

Whether it does depends on who you are. It can 1) prevent inconsistencies between databases in different orgs, 2) prevent companies trying to hide their tracks or muddy the waters, and 3) provide near-instantaneous settlement and coordination.

Let me ask you this; if you buy a Rolex watch on eBay that includes a QR code as proof of authenticity, how can you be confident that the same QR code has not been included with 100 other duplicate watches otherwise?

(I had this happen with fake Bose headphones, BTW. A correct blockchain implementation would have allowed me to spot that within minutes of receiving the package, with irrefutable proof to present to eBay/law enforcement, as opposed to months later when they failed, with only vague evidence.)


> But only one is signed by Rolex's keys

How do you sign a physical watch with keys?

> Let me ask you this; if you buy a Rolex watch on eBay that includes a QR code as proof of authenticity, how can you be confident that the same QR code has not been included with 100 other duplicate watches otherwise?

I can't be confident. So, once again, how does blockchain help?

> A correct blockchain implementation would have allowed me to spot that within minutes of receiving the package as well as irrefutable proof to present to eBay/law enforcement

- What's a "correct blockchain" and who implements it?

- How would it help if both watches/headphones/whatnot point to the same record in the ledger?

- more in two comments to this: https://news.ycombinator.com/item?id=27435785


> How would it help if both watches/headphones/whatnot point to the same record in the ledger?

The watch has an ID/serial number. The record on the ledger is transferred to the new owner. If both new owners check the ledger, only one of them will have been assigned the watch with the corresponding SN.

The payment could even be done atomically with the assignment of the (authentic) watch. As long as the buyer validates it, the only one who could forge watches would be Rolex.


> The record on the ledger is transferred to the new owner.

This. How does this magical transfer happen? The moment you say "authorised resellers", please read comments to this: https://news.ycombinator.com/item?id=27435785

> the payment could even be done atomically with the assignment

What's to stop an automatic payment with the assignment of the counterfeit watch?

> As long as the buyer validates it, the only one who could forge watches would be Rolex.

Why?


It sounds like you need to review some blockchain fundamentals. None of the above should be unclear in any way if you have a basic understanding. I could go on and try to address your questions, but really, at this point you may as well just be trolling. The way you have been strongly arguing above is not founded on reality if your last questions are honest.


> No. A company can point at an entry that claims to be from an ethical supply chain. It's not proof that that entry actually represents reality.

Sure, simply writing metadata that says "we promise we did this" into a blockchain doesn't automatically make it proof.

But that's not what anyone talks about when they discuss this.

The point is each company down the supply chain is recorded on the blockchain. The companies used to provide raw metals to the companies that run the factories to the distributors, all cryptographically sign the blockchain throughout production.

What you get at the end of that is cryptographic assurance that each party is who they claim to be and they publicise their practices.

If someone in the supply chain is found to be using unethical practices, and the company using this approach makes a public statement promising to use a more ethical supplier, this would be verifiable by any member of the public.

And of course all the actual software backing this would be in smart contracts, meaning the source code of the actively running software on the blockchain can also be verified by anyone. This is like having reproducible open source builds, but for real-life objects.

TL;DR: Quite obviously, a blockchain doesn't magically turn everything ethical, but it is a tool that could well be used for that purpose if utilised correctly and combined with other public knowledge such as public audits of factories and mines and increasing regulations enforcing supply chain transparency reports etc.

It's a piece of the larger puzzle that means when a company claims to be ethical you can see for yourself instead of taking their word for it.

> Because a publicly available hash on a publicly available blockchain is different from a QR code and cannot be spoofed... how?

If the entire supply chain and the code managing it is on a public ledger, so is a log of every unit produced. Blockchains carry cryptographic proofs, so a business can use a cryptographic signature to allow a buyer to verify an item's authenticity. The signature could still be on a QR code to make it easy for the end user, but it'd be a lot harder to fake if backed by tried and trusted cryptography.
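As a minimal stdlib sketch of "each party cryptographically signs the record" (the party names, keys, and batch record are all made up, and HMAC with shared keys stands in for the asymmetric signatures a real chain would use):

```python
import hashlib
import hmac

# Hypothetical parties; HMAC stands in for real asymmetric signatures
# (in practice each party publishes a public key and signs with the
# private one, so verifiers don't need the signing keys at all).
KEYS = {"mine": b"key-1", "factory": b"key-2", "distributor": b"key-3"}

def sign(party: str, payload: bytes) -> str:
    return hmac.new(KEYS[party], payload, hashlib.sha256).hexdigest()

record = b"batch-42: cobalt, origin X, audited 2021-05"
signatures = {party: sign(party, record) for party in KEYS}

# A verifier can check every link in the chain...
assert all(hmac.compare_digest(sig, sign(p, record))
           for p, sig in signatures.items())

# ...and a tampered record no longer matches any signature:
tampered = b"batch-42: cobalt, origin Y, audited 2021-05"
print(hmac.compare_digest(signatures["mine"], sign("mine", tampered)))  # False
```

Note this only proves who signed what; as the reply below argues, it cannot prove the signed claims match physical reality.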


> Sure, simply writing metadata that says "we promise we did this" into a blockchain doesn't automatically make it proof. But that's not what anyone talks about when they discuss this.

And then you immediately go and say exactly this:

> If a someone in the supply chain is found to be using unethical practices, and the company using this approach makes a public statement promising they will use a more ethical supplier, this would be verifiable by any member of the public.

What you're basically saying is: "If a company somehow records their PR stunt on the blockchain, they are immediately bound by it because public record, and blockchain, and smart contracts".

> And of course all the actual software backing this would be in smart contracts meaning the source code of the actively running software on the blockchain can also be verified by anyone.

And how would software running inside some other software actually verify that a company is ethical? Or that it properly labels its products? Or that it adheres to standards? Or...

> combined with other public knowledge such as public audits of factories and mines and increasing regulations enforcing supply chain transparency reports etc.

All this is already being done, and without blockchain. What exactly does blockchain bring into the equation?

I mean, TIR has been around since 1975, to give just one example [1]

> If the entire supply chain and the code managing it is on a public ledger, so is a log of every unit produced.

1. Almost everyone already logs every unit produced. Even now you can probably trace a random individual apple from a supermarket back to where it was produced. What does blockchain add to this?

2. As all logs, it doesn't log "every unit produced". It logs whatever is input into the log. If someone inputs "eco bananas", but instead ships radioactive slime, what good is blockchain?

Oh, and before you start with "audits" and all that. The supply chain isn't "producer -> consumer". It's "producer -> dozens of intermediaries -> consumer". And everything depends on what those intermediaries input. And there are already laws, practices and audits in place that ensure that you get your eco bananas instead of radioactive slime.

Or, let's use a more realistic example: 20% of seafood in restaurants is mislabeled (https://www.rd.com/article/restaurants-serve-fraudulent-fish...). Every single item there can already be traced to origin, passes multiple inspections, etc. How does blockchain help?

[1] https://en.wikipedia.org/wiki/TIR_Convention


>The only time in which you would need blockchain over a centralized DB is when everyone needs access to the DB, but no one trusts each other.

How is that not solved by a layer of permissions?


If you have one trusted party that can run a database with a permission system and audit trail, then you don't need a blockchain. But if no one trusts each other, yet you still need everyone to have the ability to update some data, then a blockchain becomes useful.


My multiple healthcare providers use OAuth and the electronic medical record system made by Epic to exchange my medical records, no blockchain required. Apple Health obtains access to all of the data at various providers through the same mechanism.

At a higher level, if you’re willing to do business with someone, and you’re providing or accepting fiat, goods, or services, there is a baseline level of trust between parties and an understanding that any breakdowns in trust will be resolved by contract law and courts.


Replying to my own comment as the edit window has timed out.

Apple has coincidently released functionality as part of iOS 15 for verifiable health records data. Note the use of digital signature crypto primitives.

> Find out how you can securely request access to someone's verifiable health records and incorporate that data safely into your app. The Health app helps people download, view, and share their health records, including their COVID-19 immunization and test results — and iOS 15 brings support for the Smart Health Card, a verifiable health record that incorporates the FHIR health data standard. We'll show you how your app can go about requesting access to this record and how you can verify the signature of the file using CryptoKit and the issuer's public key.

https://developer.apple.com/videos/play/wwdc2021/10089/
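For what it's worth, the "verifiable" part here is plain digital-signature crypto, not a blockchain. As a small illustration: per the SMART Health Card spec (as I understand it), the `shc:/` QR payload is just a numeric encoding of a JWS string, where each pair of digits maps to one character via `chr(int(pair) + 45)`. A sketch of the decoding step, before any signature verification happens:

```python
def decode_shc_digits(digits: str) -> str:
    """Decode an shc:/ numeric QR payload back into its JWS string.

    Each pair of digits encodes one character: chr(int(pair) + 45).
    """
    return "".join(chr(int(digits[i:i + 2]) + 45)
                   for i in range(0, len(digits), 2))

# JWS tokens are base64url, so they start with "eyJ" ('{"' encoded);
# those three characters encode to the digit pairs 56, 76, 29:
print(decode_shc_digits("567629"))  # -> eyJ
```

The resulting JWS is then verified against the issuer's public key (via CryptoKit in Apple's case), which is exactly the signature check the WWDC session describes.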


Doesn't this only work if no one has the ability to take over 50% of the network?


Theoretically, but there are ways to protect against this; that's what proof-of-work and proof-of-stake are for, for instance.

Certainly there is a huge financial incentive to run a 51% attack on Bitcoin or Ethereum but thus far no one has managed it.


For most applications, yes. But the "multiple people which theoretically could be accessed by multiple competing hospitals" is actually a real use case too. In many fields competitors could benefit from having access to the same curated, real-time dataset. For example, a list of pharmacies or local insurance policy plans if you're a hospital.

The problem is that none of the players in the game are incentivized to build this service, because it requires someone to host the server on their infrastructure - and if you don't host it yourself, what happens if/when your partner decides to shut it down, or cut you out, or serve you false information? With a small blockchain each competitor can host their own version of the data and share new rows in an auditable, traceable, and permanent way (such that another user cannot go in and delete anything).

It's easy to get lost in the hype, but peer-to-peer applications do serve valid use cases. A peer-to-peer database is no different, even if it's beloved by cryptobros and scammers.


How would this solve anything? Competing for what? If a bunch of people are competing on say, price, then what's stopping them putting any number they want into the system, or similarly just rejecting every number anyone else puts into the system? They have no aligned interests in the blockchain, and outside of it the legal system already defines their responsibility (i.e. prices must be advertised accurately).

"Blockchain", aka a Git repository, is only useful when multiple trusted parties are trying to coordinate a central source of truth without inadvertently removing each other's changes.

At which point we can dispense with anything that sounds like cryptocurrency because it's all just signed commits.


U.S. taxpayers who make $452k single/$509k married would have their top marginal tax rate increase from 37% to 39.6%. This will happen in 2025 anyway, assuming Congress does nothing with regard to the expiring provisions of the Tax Cuts and Jobs Act.

The people this applies to generally fall into the top 5% of earners (the top 5% cutoff in 2018 was $309k+). [1]

[1]: https://www.investopedia.com/personal-finance/how-much-incom...
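Worth stressing that the rate is marginal: only the income *above* the threshold is taxed at the higher rate. A quick sketch of the arithmetic (single-filer threshold from above; rounding and bracket interactions ignored):

```python
def extra_tax(income: float, threshold: float = 452_000,
              old_rate: float = 0.37, new_rate: float = 0.396) -> float:
    """Extra tax owed from the rate change, applied only above the threshold."""
    return max(0.0, income - threshold) * (new_rate - old_rate)

# A single filer making $552k pays the higher rate on only the last $100k,
# i.e. an extra 2.6% of $100k:
print(round(extra_tax(552_000)))  # -> 2600
```

So the change costs roughly $26 per $1k earned above the threshold, not 2.6% of total income.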


> taxpayers who make $452k single/$509k married

I wish I was so unlucky!


> If someone found an exploit in your web app container [...]

A good pattern here is to reverse-proxy all requests to the application through something like nginx. My applications tend to have a back-end service that is not accessible from the internet, with an nginx instance that proxies all API requests to it. Only port 80 is public-facing. If someone can get console access to the nginx container, springboard from there to another container, and gain root on it (again, with only port 80, maybe 8000, open) to read the envvars, frankly they've earned access to it all.
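A minimal sketch of that nginx setup (the upstream name `api` and port 8000 are assumptions; the back-end is reachable only on the internal Docker network):

```nginx
# Only nginx publishes port 80; the "api" container has no published ports.
server {
    listen 80;

    location /api/ {
        proxy_pass http://api:8000/;          # internal Docker DNS name
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The key property is that the application container never appears in a `ports:` mapping at all, so it's unreachable from outside the Docker network.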

If you are worried about secrets, check out the Docker Compose 3.9 documentation: https://docs.docker.com/compose/compose-file/compose-file-v3...
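A small sketch of those Compose secrets (service and file names are hypothetical): secrets are mounted as files under `/run/secrets/` inside the container rather than exposed as environment variables, so they don't leak via `docker inspect` or the process environment.

```yaml
# docker-compose.yml (v3.9) - secrets mounted as files, not envvars
version: "3.9"
services:
  api:
    image: registry.example.com/api:latest  # hypothetical image
    secrets:
      - db_password                         # appears at /run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt         # kept out of version control
```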


Using docker-compose.override.yml helps with that. Locally you can just add those services/configurations, and the production environment doesn't even need to know they exist.


The idea is that you have a core structure that is used in both the local/dev environment and prod, with just the override applying changes. For example, your local override might have the "build" configuration, whereas the dev or production docker-compose should not have it (assuming you are pulling from a private registry). That single core docker-compose.yml can be copied to any environment and it should just run the app, but if you really, really need to override something for a specific reason, it's not hard to do so without modifying the core docker-compose.yml.
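Concretely, the split might look like this (service and image names are made up; `docker compose up` merges the two files automatically when both are present):

```yaml
# docker-compose.yml - shared core, runs unchanged in every environment
services:
  app:
    image: registry.example.com/app:latest
    ports:
      - "80:80"
```

```yaml
# docker-compose.override.yml - local only, never shipped to prod
services:
  app:
    build: .          # build from source instead of pulling the image
```

Production just gets the core file, so `build:` never exists there and the deployed stack always runs the registry image.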

