Hacker News | new | past | comments | ask | show | jobs | submit | pvtmert's comments | login

While I understand the meaning here, modern Excel does hand over data to Microsoft (via Copilot)...

And 365 (I'm sure there is an on-premises version, but perhaps not).

I really have never heard of on-prem 365 deployments; I think any confidentiality is handled via contractual promises with legal ramifications for breaking them. With Azure GovCloud, for instance, there's no encryption / user key custody on the OneDrive side: everything you do is uploaded to Microsoft and they maintain the keys. They just hire people who passed a background check to run the infrastructure, US nationals only, etc.

There is on-prem Office.

Government and 365 is weird.

Non-military entities use “Government Community Cloud”, which is an environment where data is stored in segmented areas of Microsoft data centers, but everything else is on commercial infrastructure.

You absolutely can host keys as a customer.

The Microsoft approach to all of this stuff is insane.


Good to know, I shouldn't be surprised anything is possible once you get into having an account executive catering to you.

Not for org/enterprise licenses.

There are virtually no consequences or accountability when big-tech companies share private data. For crying out loud, they were caught red-handed sharing private data from their EU endeavors.

If even sovereign states with clear laws forbidding such behavior can't keep those companies in check, no enterprise/b2b can.


Users choose whether to use Copilot, and are free to decline its use.

How do I decline it?? I keep clicking no, hide, not interested, cancel, etc., but it keeps showing up and activating... If I had a nickel for every time I clicked it by accident in Azure because a layout shift moved it under my mouse when I was trying to press a button, I would have a lot of nickels. It even showed up as an app on my phone, because I guess the Office 365 entry got hijacked...

Your Entra admin, like your Google Workspace admin, can publish or remove features from user availability.

To be honest, during my ~10 years of experience, I haven't stumbled on any _project_ that has a _sufficient_ amount of testing or static analysis, or where everyone who contributes gives a damn about linter/formatter warnings or errors.

Only when there is a CI that fails immediately when these checks fail, and it is not possible to bypass/override it, does quality in the codebase improve significantly.

But that is very hard to tie to business results in terms of ROI; 99.9% of PMs will sweep these under the rug because of the unclear ROI.

Another remark I have to make is that these things should run in a cloud/container (isolated environment) somewhere, because I know several people who have `git commit` aliased to skip pre-commit hooks, since otherwise, they complain, it is too slow. (Slowness is another issue that can be improved, but again, less visible ROI than just adding the command-line switch.)
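As a sketch of the "unbypassable gate" idea: run the same checks server-side in CI, where a local `git commit --no-verify` alias has no effect. The tool names below (ruff, pytest) are stand-ins for illustration; substitute whatever linter, formatter, and test runner the project actually uses.

```shell
#!/bin/sh
# Hypothetical CI gate script. Because it runs on the CI server, a local
# `git commit --no-verify` (the usual pre-commit-hook bypass) cannot skip
# it; the branch simply cannot merge until this script exits 0.
set -e                    # abort on the first failing check

ruff check .              # linter errors fail the build
ruff format --check .     # unformatted files fail the build
pytest -q                 # failing tests fail the build
```

Keeping the local pre-commit hook fast (or dropping it entirely) and letting CI be the hard gate also addresses the "too slow, so I alias it away" complaint above.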


I am pretty sure this will get fixed, as Amazon depended on this feature to resolve internal domains through the ACME-managed OpenVPN service, rather than overriding global DNS settings, which may break on certain hotspot configurations.

Nope, they are required to have an option to opt out of the adapter. They chose to charge for one!

https://9to5mac.com/2025/10/16/no-the-eu-didnt-ban-apple-fro...


The option to opt out is effectively the same as charging to include one, unless you include the variation where you can opt out and pay the same price to not get one.


You know how we know it's not that? Because it's 3,599 without the power adapter and 3,678 with it. Which of those prices seems like the intentional cost of the machine?


I know the EU didn't ban chargers, but the common American sentiment somehow morphed into that.

It is interesting to see how mass-propaganda is playing out right before our eyes...


Having mixed feelings about the word "actually", as it is/was one of my favorites. Other stuff like "for instance" and "interestingly" seems to be getting there too...


Honestly, the first paragraph sounds more human and sincere for sure.

It also adds better "context" to the discussion than the usual claims/punchlines of marketing-speak.

Maybe it's not exactly the grammar itself but the overall structuring of the idea/thought. The regular output sounds much more like a marketing piece or news coverage than an individual anyway. I think, people wanna discuss things with people, not with a news-editor.


> I think, people wanna discuss things with people, not with a news-editor.

If I understand you correctly, then yes, I completely agree, but my worry is that this can also be "emulated", as shown by my comment, by models already available to us. My question is: technically there's nothing to stop new accounts from using, say, Kimi with a system prompt meant to not sound like AI, and I feel like it can be effective.

If that's the case, doesn't that raise the question of what we can detect as AI or not (which was my point)? The grandparent comment suggests that they intentionally use bad human writing sometimes to not be detected as AI, but what I am saying is that AI can do that too. So is intentionally bad writing itself a good indicator of being human?

And a bigger question: if bad writing isn't an indicator, then what is?

Or can there even be a good indicator (if, say, the bot is cautious)? If there isn't, can we be sure the comments we read aren't AI?

Essentially the dead-internet theory. I feel like most websites have bots, and we know they are bots, and they still don't care, but we are also in this misguided trust that if we see some comments which don't feel like obvious bots, then they must be humans.

My question is: what if that's wrong? It feels definitely possible with current tech/models, like Kimi for example. Doesn't this lead to some big trust issues within the fabric of the internet itself?

Personally, I don't feel like the whole website is AI, but there are chances of some sneaky action-at-a-distance happening with new accounts for sure, which could be LLMs, and we could be none the wiser.

At the same time, real accounts are gonna get questioned about whether they are LLMs if they are new (my account is almost 2 years old fwiw, and I got questioned by people essentially asking if this account is AI or not).

But what this does do, however, is make people definitely lose a bit of trust in each other, and get a little cautious about each message they read.

(This comment's a little too conspiratorial for my liking but I can't help but shake this feeling sometimes)

It just is all so weird for me sometimes. Idk, but I guess there's still an intuition about who's human and who's not, and actually the HN link/article itself shows that most people who deploy AI on HN from newer accounts use standard models without much care, which is why em-dashes get detected and are maybe a good detector for some time / some people. This could make the original OP's point about intentionally using bad grammar to sound more human make sense too, because em-dashes do have more probability of sounding AI than not :/

It's just this very weird situation that I am not sure how to explain, where depending on whichever angle you look from, you can be right.

You can try to hurt your grammar to sound more human, and that would still be right.

And you can try to be the way you are, because you think models can already produce intentionally bad grammar too / are capable of it, and having bad grammar isn't a benchmark for AI-or-not in itself, so you're gonna keep using good grammar and you're gonna be right too.

It's sort of like a paradox and I don't have any answers :/ Perhaps my suggestion right now is to not overthink it.

Because if both situations are right, then do whatever, imo. Just be human yourself, and then you can back up this statement with the truth that you are human, even if you get called AI.

So I guess, TL;DR: write with good grammar or intentionally bad grammar, just write like a human, and that's enough, or that should be enough, I guess.


I started making deliberate grammar and spelling mistakes in professional context. Not like I have a perfect writing anyway, but at least I could prove that it was self-written, not an auto-generated slop. (Could be self-written slop though :)

This applies not only work-stuff itself also to the job-applications/cv/resume and cover-letters.


unrelated but I've never understood how to put a smiley at the end of parenthetical sentences (which comes up surprisingly often for me since I use smileys a lot and also like using parentheses). Just the smiley as the end parenthesis (like this :) feels off, but adding another parenthesis (like this :) ) makes it look like it should be nested, which causes problems since I also tend to nest parenthetical sentences (like (this)).

Yes, I enjoy Lisp, how could you tell


The answer is obviously to balance your smiley faces and wrap the entire statement in the smiley face sentiment. ((: Like this :))


I like this simply for the absurdity of it, but will only use it when the entire parenthetical is modified by the smiley instead of a single word or phrase (:since I really like it:) but (it looks ugly, no hard feelings :) )


Ah, Spanish notation.


You have to invert the front one! (⸵ Of course, only noticeable if it's a winky ;)

Turned Semicolon (U+2E35 ⸵)


That’s quite the Scheme…


Your comment made me realise that there's logic to this (like this :), since in HTML we can write:

    <li> do this
    <li> and this
instead of: <li> ... </li>

and <img alt='this'> instead of <img ... />

You might like Lisp, but what you're saying reminds me of the late 00s/early 2010s xHTML2 vs. HTML5 debate :)


I'm an avid defender of xHTML. You can pry it from my cold dead hands


You monster.


Thanks, I hate it :)


Post C++11 you can just do (like this:)), no extra space needed before the last parenthesis.


But then it looks like I'm using a double smiley[0] which I do actually use on occasion

[0] :))


You could use a bracket in the smiley (like this :]) as is sometimes used when nesting parentheticals.


I sometimes do the opposite and use brackets for the parentheticals [like this :)]


I tend to rephrase myself so I don't end a statement inside a parenthesis with a smiley.

It's one of those things I think are worth putting some extra effort into, I'm glad to see at least one other person giving it some thought. Thx <3


Use dashes and the problem goes away! Well, you gain the LLM witch-hunt, but heh, no free lunch.


Synthetic example:

"Вот его, нет, не допустили (сама знаешь, почему)))"

My translation:

"But him - no, they didn't let him in (of course you know why :)"

When I went from texting friends in Russian or Ukrainian back to English, I missed right parentheses as a smiley; one or two - hi), hello)) - read to me like a smile, while by ))) and )))) there's some laughing or some other joke going on. Native speakers could weigh in; my native tongue is English.


allow me to introduce my friend, the turned smiley, here he is: ´◡` (quite useful inside brackets ´◡`)

you can find him on Windows by pressing Win + ; - not as fast as typing, but quite a bit faster than typing and then wondering if that's too many brackets or too few


I love kaomoji so I use this on occasion, but nothing can match the subtle passive aggressiveness and level of expression of a plain :)


You're really on to something there (-:


I’ve always been bothered by instances of your first example, and I mostly use “XD” instead of “:)” to sidestep the issue in my own writing.


The relevant XKCD: https://xkcd.com/541


I have the same problem. I just ditch the smiley face. :)


never >:(


Are you quoting someone doing a sad face or are you angry? ;)


This only works as "proof" up until someone invents an "authenticity" flag on the LLM output.



tbh u can basically do this now lol... no flag needed.

if u want it to sound more real u just gotta tell the bot to write that way. like literally just ask it to throw in some typos or forget to capitalize stuff. or use slang and kinda ramble instead of being all robotic and organized.


I'm trademarking the improper use of its/it's, there/their/they're, were/we're, etc. as a sign of my humanity. Apple's typocorrect is doing it for me anyways.


> I started making deliberate grammar and spelling mistakes in professional context.

I've also noticed an increase of this in myself and others. I used to edit a lot more before sending anything, but now it seems more authentic if you just hit send, so it's more off the cuff, typos, broken sentences and all.

I'm sure an LLM could easily mimic this but it's not their default.


I’ve been doing the same thing. Basically a Turing test.


I appreciate you including a few minor mistakes in this very post:

> I started making deliberate grammar and spelling mistakes in professional context[s]. Not like I have ~a~ perfect writing anyway, but at least I could prove that it was self-written, not an auto-generated slop. (Could be self-written slop though :)

> This applies not only [to] work-stuff itself also to the job-applications/cv/resume and cover-letters.

I conclude you are real.


To me the OP read like a particular dialect of English which is quite common on HN, rather than being incorrect.


Imagine the delays are so prominent that someone decides to make a website as a CTA (call to action) and semi-regularly shares updates on it...

I've been to Seattle once (ex-Amazon here), when the DevCon was held in town while my team was located in Bellevue. I took the initiative to rent a bike for a day ($60 for a drop-bar gravel bike). I must say, although I did not beat the time between Day 1 (the office across from the Spheres) and Bingo (the Bellevue office), it was not far off. Even compared to the "shuttles" Amazon operated: the shuttle took about 1h while the ride takes around 1h15m. (Plus sweat.)

> P.S.: I would say I am in "fair" shape, as I ride quite a lot throughout the year.


I was checking some of the bookmarks/reading lists I had saved earlier and stumbled on this piece on exe.dev's blog.

