You just need to get a publicly trusted CA to mint a certificate for your new site.
This can be done, for example, with Let's Encrypt, using one of the several domain validation challenges they support.
There are some protections against this, such as CAA records in DNS, which restrict which CAs can issue certs and, depending on the CA, which validation methods are allowed. But that may not provide adequate protection.
For example, if you are using LE with a validation mechanism other than DNS, an attacker could trick LE into issuing them a cert. DNS-based validation in turn depends on the security of DNS itself, which can be tricky.
So, yes, BGP hijacks can be used to impersonate other sites, even though they are using HTTPS.
When you configure your domains, make sure you set up CAA, locked down to your specific CA, and have DNSSEC set up, as a minimum bar. Also avoid using DV mechanisms that rely only on control over an IP address, as those can be subverted via BGP.
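As a concrete sketch (the domain and the mailto address are placeholders), a locked-down zone might carry CAA records like these; the `validationmethods` parameter is defined in RFC 8657 and is honored by Let's Encrypt:

```
example.com.  IN  CAA  0 issue "letsencrypt.org; validationmethods=dns-01"
example.com.  IN  CAA  0 issuewild ";"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

The `issuewild ";"` line forbids wildcard issuance entirely, and `iodef` gives CAs an address to report rejected issuance requests to.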
There is no right way to do Spring Boot. The entire idea is broken.
Dependency injection is good. It makes it possible to test stuff.
Automagic wiring of dependencies based on annotations is bad and horrible.
If you want to do dependency injection, you should do it the way Go programs do it. Create the types you need in your main method and pass them into the constructors that need them.
When you write tests and you want to inject something else, then create something else and pass that in.
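To make the pattern concrete, here's a minimal sketch of manual constructor injection (in Python for brevity; the class names are made up for illustration):

```python
# Manual constructor injection: build the object graph at the entry point
# and pass dependencies in explicitly. No container, no annotations,
# no reflection -- just constructors.

class Database:
    def fetch_user(self, user_id):
        return {"id": user_id, "name": "alice"}

class UserService:
    def __init__(self, db):
        self.db = db  # dependency is passed in, never looked up

    def greeting(self, user_id):
        return f"hello, {self.db.fetch_user(user_id)['name']}"

class FakeDatabase:
    """Test double: to inject something else, just pass something else."""
    def fetch_user(self, user_id):
        return {"id": user_id, "name": "test-user"}

def main():
    # The "container" is just your main function.
    service = UserService(Database())
    print(service.greeting(1))

if __name__ == "__main__":
    main()
```

In a test you'd simply write `UserService(FakeDatabase())`; the wiring is visible in plain code and the compiler/interpreter, not a framework, tells you when something is missing.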
But the idea that you create magic containers and then decorate packages or classes or methods or fields somewhere and then stuff suddenly gets wired into something else via reflection magic is a maintenance nightmare. This is particularly true when some bean is missing, and the one guy who knows which random package out of hundreds has that bean in it is on vacation and the poor schmucks on his team have no clue why their stuff doesn't work.
Manual dependency injection is fine, but it doesn't scale. Especially when you start refactoring things and dependencies need to be moved around.
The other issue is dynamic configuration. How do you handle replacing certain dependencies, e.g. for testing, or different runtime profiles? You could try to implement your own solution, but the more features you add, the closer you'd get to a custom DI framework. And then you'd have an actual mess, a naive non-standard solution for a solved problem, because you didn't want to read the manual for the standard implementation.
By the way, Spring dependency injection is mainly based on types. Annotations are not strictly necessary, you can interact with the Spring context in a procedural/functional manner, if you think that makes it better. You can also configure MVC (synchronous Servlet-based web) or Webflux (async web) routes functionally.
When a bean is missing, the app will fail to start, and you will get an error message explaining what's missing and which class depends on it. The easiest way to ensure this doesn't happen is to keep the empty @SpringBootTest test case that comes with the template. It doesn't have any assertions, but it will spin up a full Spring context, and fail if there is a configuration problem.
The only complicated part about Spring Boot is how the framework itself can be reconfigured through dependency injection. When you provide a certain "bean", this can affect the auto-configuration, so that other beans, which you might expect, are no longer automatically created. To debug this behavior, check out the relevant AutoConfiguration class (in your IDE, use the "go to class" shortcut and type something like FooAutoConfi..., e.g. JdbcAutoConfiguration).
In a good codebase, the configuration itself would be tested. For instance, if you did something a bit more complicated like connecting two JDBC databases at the same time, you would test that it reads the configuration from the right sources and provides the expected beans.
To be honest, I’m not surprised that GitHub has been having issues.
If you have ever operated GitHub Enterprise Server, it’s a nightmare.
It doesn’t support active-active. It only supports passive standbys. Minor version upgrades can’t be done without downtime, and don’t support rollbacks. If you deploy an update and it has a bug, the only thing you can do is restore from backup, leading to data loss.
This is the software they sell to their highest margin customers, and it fails even basic sniff tests of availability.
Data loss for source code is a really big deal.
Downtime for source control is a really big deal.
Anyone who would release such a product with a straight face clearly doesn’t care deeply about availability.
So, the fact that their managed product is also having constant outages isn’t surprising.
I worked on GHES for a couple of years. Before I joined, it sounded like it was run as a sort of volunteer rotation; there wasn’t durable funding for a team. Mind-boggling that the money maker of the company was staffed like that.
It is a complicated project. Thankfully the durable funding story has improved in recent years and they are staffing GHES at levels it hasn’t seen in at least 7 years. Hopefully it improves. I’m not there anymore; I was laid off last year.
I use a DGX Spark, with Cosmic as my DE, and it's super awesome.
This is a bit of a Franken-distro, as it's Ubuntu + Nvidia packages + System76 packages, but it works pretty well.
I've been using Flatpak Chromium, which is OK for most things. It performs a bit better than Firefox does. Having access to official Chrome will be nice though, as it should come with Widevine support. Chromium builds don't ship the DRM module, so some things like Netflix don't work.
“we find that the LLM adheres to the legally correct outcome significantly more often than human judges”
That presupposes that a “legally correct” outcome exists
The Common Law, which is the foundation of federal law and the law of 49/50 states, is a “bottom up” legal system.
Legal principles flow from the specific to the general. That is, judges decide specific cases based on the merits of each individual case. General principles are derived from lots of specific examples.
This is different from the Civil Law used in most of Europe, which is top-down. Rulings in specific cases are derived from statutory principles.
In the US system, there isn’t really a “correct legal outcome”.
Common Law heavily relies on jurisprudence. That is, we have a system that defers to the opinions of “important people”.
Arguing that this is a Common Law matter in this scenario is funny in a wonky lawyerly kind of way.
The legal issue they were testing in this experiment is a choice-of-law and procedure question, which is governed by a line of cases starting with Erie Railroad, in which Justice Brandeis famously said, "There is no federal common law."
I don't think that common law doctrine applies here though. The facts of any particular case always apply to that specific case no matter what the system. It is the application of the law to those facts which is where they differ, and in common law systems lower courts almost never break new ground in terms of the law. Judges almost always have precedent, and following that is the "legally correct" outcome.
Choice-of-law is also generally a statutory issue, so common law is not generally a factor - if every case ever decided was contrary to the statute, the statute would still be correct.
Remember the article that described LLMs as lossy compression and warned that if LLM output dominated the training set, it would lead to accumulated lossiness? Like a JPEG of a JPEG.
A Socratic law professor will demoralize students by leading them, no matter the principle or reasoning, to a decision that stands for exactly the opposite. GPT or I can make excuses and advocate for our pet theories, but these contrary decisions exist, everywhere.
I am comforted that folks still are trying to separate right from wrong. Maybe it’s that effort and intention that is the thread of legitimacy our courts dangle from.
Some of this is people trying to predict the future.
And it’s not unreasonable to assume it’s going there.
That being said, the models are not there yet. If you care about quality, you still need humans in the loop.
Even when given high quality specs, and existing code to use as an example, and lots of parallelism and orchestration, the models still make a lot of mistakes.
There’s lots of room for Software Factories, and Orchestrators, and multi agent swarms.
But today you still need humans reviewing code before you merge to main.
Models are getting better, quickly, but I think it’s going to be a while before “don’t have humans look at the code” is true.
I don’t have experience with dependabot at all. I didn’t realize it was satire. I just kept thinking, “This sounds like terrible advice. This can’t be right.”
The study, as described in the summaries, sounds very flawed.
1. They only tested 2 radiologists, and they compared them to one model. Thus the results don’t say anything about how radiologists in general perform against AI in general. The most generous thing the study can say is that 2 radiologists outperformed a particular model.
2. The radiologists were only given one type of image, and only for those patients that were missed by the AI. The summaries don’t say whether the test was blind. The study has 3 authors, all of whom appear to be radiologists, and it mentions 2 radiologists looked at the AI-missed scans. This raises questions about whether the test was blind.
Giving humans data they know are true positives and saying “find the evidence the AI missed” is very different from giving an AI model also trained to reduce false positives a classification task.
Humans are very capable at finding patterns (even if they don’t exist) when they want to find a pattern.
Even if the study was blind initially, trained doctors would likely quickly notice that the data they are analyzing is skewed.
Even if they didn’t notice, humans are highly susceptible to anchoring bias.
Anchoring bias is a cognitive bias where individuals rely too heavily on the first piece of information they receive (the "anchor") when making subsequent judgments or decisions.
The skewed nature of the data has a high potential to amplify any anchoring bias.
If the experiment had controls, any measurement error resulting from human estimation errors could potentially cancel out (a large random sample of either images or doctors should be expected to have the same estimation errors in each group). But there were no controls at all in the experiment, and the sample size was very small. So the influence of estimation biases on the result could be huge.
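A toy simulation (made-up rates, not the study's numbers) of why a control arm matters: if both arms are read by readers with the same systematic bias, the bias shows up in both arms and cancels in the difference between them:

```python
import random

random.seed(0)

# Toy model: every reader over-calls lesions by a fixed bias.
# With a control arm read by the same biased readers, the bias
# inflates both arms equally and cancels in the difference.
def read_scans(true_rate, bias, n):
    # fraction of scans called positive = true rate + reader bias
    return sum(random.random() < min(1.0, true_rate + bias) for _ in range(n)) / n

n = 100_000
bias = 0.10                              # made-up systematic over-calling
treated = read_scans(0.30, bias, n)      # arm of interest, reads ~0.40
control = read_scans(0.20, bias, n)      # control arm, reads ~0.30
print(round(treated - control, 2))       # difference recovers ~0.10, the true gap
```

Without the control arm, the raw `treated` number carries the full reader bias and there is nothing to subtract it against.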
From what I can read in the summary, these results don’t seem reliable.
They did NOT test radiologists. There were NO healthy controls. They evaluated the AI false negative rate and used exclusively unblinded radiologists to grade the level of visibility and other features of the cancer.
The utility of the study is to evaluate potential AI sensitivity if used for mass fully automated screenings using mammography data. But it says NOTHING about the CRUCIAL false positive rate (no healthy controls) and NOTHING about AI vs. human performance.
Huh? I was commenting that there were no controls and the doctors were given skewed data, so any conclusions about AI ability vs. doctor ability seem misplaced. Which seems to be what you just said… so I am confused about what I said that was inaccurate.
Can you clarify?
I also hinted at the fact that I only had access to the posted summary and the original linked article, and not the study. So if there is data I am missing… please enlighten me.
I was just reinforcing that point as your comment was worded in a way that left room for doubt. Sorry if this came across as critical toward you or implying you held a different interpretation.
The article has the headline "AI Misses Nearly One-Third of Breast Cancers, Study Finds".
It also has the following quotes:
1. "The results were striking: 127 cancers, 30.7% of all cases, were missed by the AI system"
2. "However, the researchers also tested a potential solution. Two radiologists reviewed only the diffusion-weighted imaging"
3. "Their findings offered reassurance: DWI alone identified the majority of cancers the AI had overlooked, detecting 83.5% of missed lesions for one radiologist and 79.5% for the other. The readers showed substantial agreement in their interpretations, suggesting the method is both reliable and reproducible."
So, if you are saying that the article is "not about AI performance vs human performance", that's not correct.
The article very clearly makes claims about the performance of AI vs the performance of doctors.
The study doesn't have the ability to state anything about the performance of doctors vs the performance of AI, because of the issues I mentioned. That was my point.
But the study can't state anything about the sensitivity of AI either, because it doesn't compare the sensitivity of AI-based mammography (X-ray) analysis with that of human-reviewed mammography. Instead it compares AI-based mammography vs. human-read DWI where the humans knew the results were all true positives. It's both a different task ("diagnose" vs. "find a pattern to verify an existing diagnosis") and different data (X-ray vs. MRI).
So, I don't think the claims from the article are valid in any way. And the study seems very flawed.
Also, attempting to measure sensitivity without also measuring specificity seems doubly flawed, because there are very big tradeoffs between the two.
Increasing sensitivity while also decreasing specificity can lead to unnecessary amputations. That's a very high cost. Also, apparently studies have shown that high false positive rates for breast cancer can lead to increased cancer risks because they deter future screening.
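The tradeoff is easy to see with a toy confusion matrix (made-up numbers, not from the study): pushing a detector's threshold toward catching every cancer inevitably flags more healthy patients.

```python
# Toy confusion-matrix numbers (NOT from the study) illustrating the
# sensitivity/specificity tradeoff: a laxer decision threshold catches
# more cancers but raises more false alarms.
def metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # of all cancers, fraction flagged
    specificity = tn / (tn + fp)  # of all healthy, fraction cleared
    return sensitivity, specificity

strict = metrics(tp=70, fn=30, tn=950, fp=50)   # misses cancers, few false alarms
lax = metrics(tp=95, fn=5, tn=700, fp=300)      # catches more, many false alarms
print(strict)  # sensitivity 0.7, specificity 0.95
print(lax)     # sensitivity 0.95, specificity 0.7
```

Measuring only the first number tells you nothing about the price paid in the second, which is exactly why reporting sensitivity without specificity is incomplete.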
Given that I don't have access to the actual study, I have to assume I am missing something. But I don't think it's what you think I'm missing.
I use Cosmic on a DGX Spark, as my daily driver, and it works pretty well.
They don’t have a Pop!_OS ISO for arm64, but they do have an arm64 Debian repo. So I just took DGX OS (what Nvidia ships on the device), added the Pop!_OS “releases” repo, and installed cosmic-session.
It works like a charm and provides a super useful tiling experience out of the box.
This is replacing my M3 Pro as my daily driver and I’ve been pretty happy with it.
I recently upgraded to an ultrawide monitor and find the Cosmic UX to be hands-down better than what I get on the Mac with it.
If you want a Linux desktop with the productivity boost of a tiling window manager and a low learning curve, it’s pretty good.
How often do they ship new versions?
My understanding is that:
1. Windows drivers are attested by Microsoft
2. Windows collects driver telemetry
Which means a really good question to ask is:
Why are they canceling driver signing accounts without looking at metrics?