I find that this happens when you enter folders that contain media files (audio, video and so on). One way to fix it is to enter one such folder and remove all the media-metadata columns - things like track length, artist, contributing artists, whatever else (as opposed to the basic columns like file name and date modified). Then click the three dots icon (...) in the File Explorer menu, select the View tab, and click 'Apply to Folders'. This applies the column and view settings you just chose to all folders of that type.
Now all folders with media files open immediately. Also, if you want video folders to open with no wait at all, right-click in the folder and select 'View -> Details' or 'View -> List' or some other option that doesn't create thumbnails, and it'll load even quicker.
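If you'd rather script this than click through the dialogs, there's also a commonly cited registry tweak that forces Explorer to treat every folder as "General items" so it skips the media-metadata scan entirely. This isn't part of the steps above, and the key path is just the one widely documented online, so verify it on your build; a minimal Python sketch:

```python
import winreg

# Widely documented tweak (not from the steps above): make Explorer treat
# all folders as "General items" so it skips media-metadata scanning.
KEY = (r"Software\Classes\Local Settings\Software\Microsoft"
       r"\Windows\Shell\Bags\AllFolders\Shell")

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY) as key:
    winreg.SetValueEx(key, "FolderType", 0, winreg.REG_SZ, "NotSpecified")

print("Done - restart Explorer (or sign out and in) for this to take effect.")
```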
> remove all the media-metadata columns [...] click the three dots icon (...) in the File Explorer menu, select the View tab, and click 'Apply to Folders' [...] right-click in the folder and select 'View -> Details' or 'View -> List' or some other option
I'm sorry, this is very funny to me in the context of the person upthread arguing about how great "agentic OSes" are. Some people seem to believe that we're living in the future, but I'm pretty sure we're still stuck in Windows '95.
It's not just media files. I'm forced to use Windows 11 on my work PC, and I had to disable the new shell extensions to make the file explorer usable again. It's noticeably faster without the new UI.
Looking up media details is of course one of the main causes of the slowdown. Thank you for sharing this information. However, all the folders are already configured as general folders, and this one specifically has a bunch of PDF files.
When such basic tasks fail spectacularly, nobody can have any confidence that complex things can be achieved reliably. Instead of spying on their users and trying to squeeze more and more money out of them, they should first focus on making a great product and work on making it better, not on researching ways to enshittify things.
I feel like I kind of borked the last paragraph so I want to clarify something.
The point is basically that since these repeating patterns are different every time, they are not emergent. They don't really "exist" except as matter repeating itself in a similar way. Emergence implies there is some kind of qualitative difference between the emergent level and the lower level, but I would argue there isn't.
Even though I think it's true that it's lossy, I think there is more going on in an LLM neural net. Namely, when it uses tokens to produce output, the text essentially gets split into millions or billions of chunks, each with an associated probability. So in essence the LLM can do a form of pattern recognition where the patterns are the chunks, and that also enables basic operations on those chunks.
That's why I think you can work iteratively on code and change parts of it while keeping others: the code gets chunked and "probabilitized". It can also do semantic processing and understanding, where it applies knowledge about one topic (like 'swimming') to another (like a 'swimming spaceship' - it then generates text about what a swimming spaceship would be, which is not in the dataset). It chunks the input into patterns with probabilities and then combines them based on probability. I do think this is a lossy process though, which sucks.
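To make the "chunks with probabilities" idea concrete, here's a toy sketch: a word-level bigram table built from a tiny made-up corpus. Real LLMs use learned subword tokens and deep networks rather than count tables, so this only illustrates the principle, not the mechanism:

```python
import random
from collections import Counter, defaultdict

# Toy "chunks with probabilities": count which word follows which.
corpus = "the ship swims the ship flies the fish swims".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample a continuation in proportion to how often it followed `prev`.
    options = counts[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

print("ship ->", next_word("ship"))  # 'swims' or 'flies', weighted by counts
```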
Maybe it's looked down upon to complain about downvotes, but I have to say I'm a little disappointed that there is a downvote with no accompanying post to explain that vote, especially on a post that is factually correct and has nothing obviously wrong with it.
LLMs _can_ think top-to-bottom but only if you make them think about concrete symbol based problems. Like this one: https://chatgpt.com/s/t_692d55a38e2c8191a942ef2689eb4f5a
The prompt I used was "write out the character 'R' in ascii art using exactly 62 # for the R and 91 Q characters to surround it with"
Here it has a top-down goal of keeping the exact number of #s and Qs, and it does keep it in the output. The purpose of this is to make it produce the ASCII art in a step-by-step manner instead of fetching premade ASCII art from training data.
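If you want to check the constraint yourself rather than trust it by eye, counting the characters takes two lines (the string below is a placeholder, not the model's actual output):

```python
# Verify the "exactly 62 # and 91 Q" constraint from the prompt above.
art = """\
QQQQQ
Q###Q
"""  # placeholder - paste the model's ASCII-art output here

print(art.count("#"), art.count("Q"))  # should print: 62 91
```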
What it does not always reason well about are abstract problems like the doctor example in the post.
The real key to reasoning, IMO, is the ability to decompose the text into a set of components, then apply world-model knowledge to those components, then manipulate those components based on what they represent.
Humans have an associative memory, so when we read a word like "doctor", our brain gathers the world knowledge about that word automatically. It's kind of hard to tell exactly what world knowledge the LLM has vs doesn't have, but it seems like it's doing some kind of segmentation of words, sentences and paragraphs based on the likelihood of those patterns in the training data, and then it can do _some_ manipulation of those patterns based on the likelihoods of related patterns.
For example, if there is a lot of text talking about what a doctor is, that produces a probability distribution over what a doctor is, which it can then use in other prompts relating to doctors. But I have seen this fail before, because all of this knowledge is not combined into one world model; it is purely conditioned on the prompt and the probabilities associated with that prompt. It can contradict itself, in other words.
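Here's a toy sketch of what I mean by prompt-conditioned rather than unified: the same word gets a different continuation distribution depending on which corpus (standing in for prompt context) it's conditioned on. The corpora are made up for illustration:

```python
from collections import Counter

# Two made-up corpora standing in for different prompt contexts.
medical = "the doctor treats patients and the doctor prescribes medicine".split()
scifi = "the doctor travels in time and the doctor fights aliens".split()

def after(corpus, word="doctor"):
    # Distribution over what follows `word`, conditioned on this corpus only.
    return Counter(nxt for prev, nxt in zip(corpus, corpus[1:]) if prev == word)

print(after(medical))  # Counter({'treats': 1, 'prescribes': 1})
print(after(scifi))    # Counter({'travels': 1, 'fights': 1})
```

Nothing ties the two distributions together, which is roughly how the model can agree with itself in one context and contradict itself in another.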
I think something that's missing from AI is the ability humans have to combine and think about ANY sequence of patterns as much as we want.
A simple example: say I think about the sequence "banana - car - dog - house". I can, if I want to, replace car with tree in my mind, then replace tree with rainbow, then replace rainbow with something else, etc. I can sit and think about random nonsense for as long as I want and create these endless sequences of thoughts.
Now I think when we're trying to reason about a practical problem or whatever, maybe we are doing pattern recognition via probability and so on, and for a lot of things it works OK to just do pattern recognition, for AI as well.
But I'm not sure that pattern recognition and probability work for creating novel, interesting ideas all of the time. Because humans can create these endless sequences, we stumble upon ideas that are good, whereas an AI can only see the patterns that are in its data. If it could create a pattern that is not in the data and then recognize that pattern as novel or interesting in some way, it would still lack the flexibility of humans I think, but it would be interesting nevertheless.
one possible counter-argument: can you say for sure how your brain is creating those replacement words? When you replace tree with rainbow, does rainbow come to mind because of an unconscious neural mapping between both words and "forest"?
It's entirely possible that our brains are complex pattern matchers, not all that different than an LLM.
That's a good point and I agree. I'm not a neuroscientist but from what I understand the brain has an associative memory so most likely those patterns we create are associatively connected in the brain.
But I think there is a difference between having an associative memory, and having the capacity to _traverse_ that memory in working memory (conscious thinking). While any particular short sequence of thoughts will be associated in memory, we can still overcome that somewhat by thinking for a long time. I can for example iterate on the sequence in my initial post and make it novel by writing down more and more disparate concepts and deleting the concepts that are closely associated. This will in the end create a more novel sequence that is not associated in my brain I think.
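The procedure I'm describing can even be written down mechanically: find the most closely associated pair still in the sequence and swap one member out for the least-associated candidate. A real version would need word embeddings for the similarity scores; here they're random stand-ins just so the sketch runs:

```python
import random

random.seed(0)
pool = ["banana", "car", "dog", "house", "tree", "rainbow", "anvil", "fog"]
# Random stand-in similarity scores; a real version would use embeddings.
sim = {frozenset((a, b)): random.random()
       for i, a in enumerate(pool) for b in pool[i + 1:]}

def most_associated_pair(seq):
    pairs = [(a, b) for i, a in enumerate(seq) for b in seq[i + 1:]]
    return max(pairs, key=lambda p: sim[frozenset(p)])

seq = ["banana", "car", "dog", "house"]
for _ in range(5):
    _, b = most_associated_pair(seq)
    keep = [w for w in seq if w != b]
    # Swap out one member of the pair for the candidate least similar
    # to everything we keep.
    best = min((w for w in pool if w not in seq),
               key=lambda w: max(sim[frozenset((w, x))] for x in keep))
    seq[seq.index(b)] = best
    print(seq)
```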
I also think there is the trouble of generating and detecting novel patterns. We know, for example, that it's not just about low probability: there are billions of unique low-probability sequences of patterns that have no inherent meaning, so uniqueness itself is not enough to detect them. So how does the brain decide that something is interesting? I do not know.
>I can for example iterate on the sequence in my initial post and make it novel by writing down more and more disparate concepts and deleting the concepts that are closely associated. This will in the end create a more novel sequence that is not associated in my brain I think.
This seems like something that LLMs can do pretty easily via CoT.
As a fun test, I asked ChatGPT to reflexively give me four random words that are not connected to each other, without thinking. It provided: lantern, pistachio, orbit, thimble
I then asked it to think carefully about whether there were any hidden relations between them, and to make any changes or substitutions to improve the randomness.
No, the new algorithm used to determine this was created by ICM-CSIC, who are also the publishers of this article.
Also, the authors of the paper are involved with the article; for example, there is this quote:
“We are witnessing a true reversal of ocean circulation in the Southern Hemisphere—something we’ve never seen before,” explains Antonio Turiel, ICM-CSIC researcher and co-author of the study.
I guess so too... but whatever it is, it cannot possibly be something algorithmic. Therefore it doesn't matter in terms of demonstrating that AI has a boundary there that cannot be transcended by tech, compute, training, data, etc.
Why can't it be algorithmic?
If the brain uses the same process on all information, then that is an algorithmic process. There is some evidence that it does use the same process for things like consolidating information, processing the "world model" and so on.
Some processes are undoubtedly learned from experience, but considering that people seem to think many of the same things and are similar in many ways, it remains to be seen whether the most important parts are learned rather than innate from birth.
Explain what you mean by "algorithm" and "algorithmic". Be very precise. You are hinging your entire argument on this vague word, so it is necessary that you explain first what it means. From reading your replies here, it is clear you are laboring under a definition of "algorithm" quite different from the accepted one.
Why do you think humans are capable of doing anything that isn't algorithmic?
This statement, and your lack of any mention of the Church-Turing thesis in your papers, suggests you're using a non-standard definition of "algorithmic", and your argument rests on it.
This paper is about the limits in current systems.
AI currently has issues with seeing what's missing - seeing the negative space.
When dealing with complex codebases you are newly exposed to, you tackle an issue from multiple angles. You look at things from data structures, from code execution paths... basically, humans clearly have some pressure to go "fuck, I think I lost the plot", and then approach it from another paradigm, or try to narrow scope, or, based on the increased information, isolate the core place where edits need to be made to achieve something.
Basically the ability to say, "this has stopped making sense" and stop or change approach.
Also, we clearly do path exploration and semantic compression in our sleep.
We also have the ability to transliterate data between semantic and visual structures, time series, and light algorithms (but not exponential algorithms - we have a known blindspot there).
Humans are better at seeing what's missing, better at avoiding premature closure, better at reducing scope using many different approaches, and because we operate in linear time and there are a lot of very different agents, we collectively nibble away at complex problems over time.
I mean, on a 1:1 telomere basis, due to structural differences people can be as low as 93% similar genetically.
We also have different brain structures, and I assume they don't all function on a single algorithmic substrate: visual reasoning about words, semantic reasoning about colors, synesthesia, the weird handoff between hemispheres, parts of our brain that handle logic better, parts that handle illogic better. We can introspect on our own semantic saturation; we can introspect that we've lost the plot. We get weird feelings when something seems logically missing, and we can dive on that part and then zoom back out.
There's a whole bunch of shit the brain does because it has a plurality of structures to handle different types of data processing, and even then the message type used seems flexible enough that you can shove word data into a visual-processing part and see what falls out - and this happens without us thinking about it explicitly.