You want more “AI”? No? Well, too damn bad, here’s “AI” in your file manager.
With AI actions in File Explorer, you can interact more deeply with your files by right-clicking to quickly take actions like editing images or summarizing documents. Like with Click to Do, AI actions in File Explorer allow you to stay in your flow while leveraging the power of AI to take advantage of editing tools in apps or Copilot functionality without having to open your file. AI actions in File Explorer are easily accessible – to try out AI actions in File Explorer, just right-click on a file and you will see a new AI actions entry on the context menu that allows you to choose from available options for your file.
↫ Amanda Langowski and Brandon LeBlanc at the Windows Blogs
What, you don’t like it? There, “AI” that reads all your email and sifts through your Google Drive to barf up stunted, soulless replies.
Gmail’s smart replies, which suggest potential replies to your emails, will be able to pull information from your Gmail inbox and from your Google Drive and better match your tone and style, all with help from Gemini, the company announced at I/O.
↫ Jay Peters at The Verge
Ready to submit? No? Your browser now has “AI” integrated and will do your browsing for you.
Starting tomorrow, Gemini in Chrome will begin rolling out on desktop to Google AI Pro and Google AI Ultra subscribers in the U.S. who use English as their Chrome language on Windows and macOS. This first version allows you to easily ask Gemini to clarify complex information on any webpage you’re reading or summarize information. In the future, Gemini will be able to work across multiple tabs and navigate websites on your behalf.
↫ Josh Woodward
Mercy? You want mercy? You sure give up easily, but we’re not done yet. We destroyed internet search and now we’re replacing it with “AI”, and you will like it.
Announced today at Google I/O, AI Mode is now available to all US users. The focused version of Google Search distills results into AI-generated summaries with links to certain topics. Unlike AI Overviews, which appear above traditional search results, AI Mode is a dedicated interface where you interact almost exclusively with AI.
↫ Ben Schoon at 9To5Google
We’re going to assume control of your phone, too.
The technology powering Gemini Live’s camera and screen sharing is called Project Astra. It’s available as an Android app for trusted testers, and Google today unveiled agentic capabilities for Project Astra, including how it can control your Android phone.
↫ Abner Li at 9To5Google
And just to make sure our “AI” can control your phone, we’ll let it instruct developers how to make applications, too.
That’s precisely the problem Stitch aims to solve – Stitch is a new experiment from Google Labs that allows you to turn simple prompt and image inputs into complex UI designs and frontend code in minutes.
↫ Vincent Nallatamby, Arnaud Benard, and Sam El-Husseini
You are not needed. You will be replaced. Submit.
Linux has been my preferred desktop for many years now but I have never pushed it on others. I support a bunch of other people including family. I have never tried to influence them and most have used Windows. My wife uses macOS.
Recently, I have actively started to try to convert people to Linux and I have already moved a few. Part of it is the end of support for Windows 10 although that is more of an excuse or a lever to help convince people. My real motivation is the sense that Windows users are about to get overrun with AI and that this will be unpleasant or even risky for many users. I just moved my mother to LMDE a week or two ago. She is using Firefox and Thunderbird. These kinds of articles make me so happy that she is no longer a Windows user.
One can see the ambition to move the UI towards a “Star Trek computer”.
This is going to result in tons of products that the *Sirius Cybernetics Corporation* would be proud of.
I think putting Genuine People Personality in everyday objects would drive me mad… but I can actually see that happening.
Thom, I think you need some help man…
We all do. All those of us who see how AI is a security and privacy liability AND an unnerving way to make us lose time (including when others use it to drown us in their byproducts).
The problem in my eyes is not AI per se but the abuse of it by companies trying to upsell beta products actually designed to siphon more data from our devices.
Wanting “AI sceptics” to get “helped” until they’re no longer critical of this abuse does have a dystopian vibe, though, don’t you think?
worsehappens,
I think this depends. I dislike when user data is held and processed on corporate servers; this poses numerous security and privacy problems. I’m not convinced this is a side effect either, I think it’s an engineering goal: “how can we take the user’s data and make them more dependent on us without giving away the fact that we want to do this?” AI pops in and says “hold my beer”. I am not a fan of this corporate tethering, which has already been normalized with Android and iPhone, and it’s increasingly happening with desktop computers too 🙁
I actually think that running AI locally is really cool and I see a lot of potential for artistic domains, but unfortunately corporations like engineering things in such a way that keeps us dependent on them and we cannot turn them off, which is a dystopia to me.
Even AI online, like a simple ChatGPT has been very kind and helpful to me.
a) with ChatGPT I have been able to wrap advanced liburing calls (fixed file descriptors, pipes) into a Java library using the new JDK 23 Foreign Function Interface. The result beats zero-copy Java NIO2 file channels, and I could never have come close to such a thing with normal StackOverflow. (I am just an accountant, after all.)
b) write business/project proposals and offers, starting with a clean slate? I find it much easier to correct ChatGPT’s drafts, changing only the wording and prices, than to write all of this myself from scratch
c) when responding to Audit Exceptions, Regulators and other drones, ChatGPT becomes a life and sanity saver.
Andreas Reichel,
Yes. Nearly anything that can be implemented locally can be moved into the cloud. I just have a longstanding gripe with cloud solutions that keep us tethered, at the expense of local implementations that stay under our control and keep user data on our machines.
To provide some background, here are some examples that bother me: Google printers that could not accept print jobs locally but had to have them submitted through Google data centers. This was infuriatingly bad engineering, but Google designed it this way on purpose. Same deal with Chromecast devices that can’t be controlled locally if the internet is out. It works and is useful to many users, but Google intentionally sacrificed owner control and independence to force users to relinquish more private data to them. I have such a big problem with this, but it’s what every major tech company is trying to do. Even cars and home appliances are going in this direction. Some of our relatives have smart fridges. Does a fridge need to phone home? No, but it does.
I bought a smart thermostat because I wanted to control the temperature from my phone. It’s nice to have, and I thought I was being smart about it by buying a model that explicitly had a local API and local control: if the servers shut down, I could still use it. However, lo and behold, the service did get discontinued, and about a week after that the thermostat’s local functionality got cut off too. Despite trying to make an informed choice by buying hardware with local access, I still got screwed.
I feel the same way about AI as a service. It’s really neat to run LLMs and other generative AI tools locally, but I fear that over time it’s going to get harder to avoid being tethered to cloud crap I don’t want.
I appreciate your pragmatic response. AI is good when it’s good and bad when it’s bad — like all technologies, it’s neither good nor bad nor neutral.
For me it’s a godsend when I’m trying to sift through all the shite that content farms produce these days — articles that bury their dubiously valuable and likely incorrect lede in a bunch of boilerplate text that tells me “there are benefits to doing something beneficial, such as…” Likely a lot of it is already AI-generated (but it’s not supplanting any writing jobs that brought value to the internet). Faced with an ocean of useless prose, I really love just being able to ask an AI for a summary of what it’s found.
And local models — again, not just glitter being peddled by big tech hucksters, but super valuable in dealing with vast oceans of content. I just installed a free photo manager called Immich, and now I can drill down through my 130,000 photos and find that one that contained the coyote that killed a deer on my lawn. (You wouldn’t believe how many times I’ve searched for that photo, found it, and promptly forgotten where I found it.)
Everything else — either meh or yikes for me.
skeezix,
I agree. It seems some people are biased against ANY AI, but it’s just a tool, there are good applications and bad applications.
I expect that someone will come out with an AI shell for Linux. I use the shell all the time, but sometimes I need to look up command line parameters for iw/ip/dhclient/lvm/mdraid/etc. It seems to me that an LLM extensively trained on man pages might do a great job taking my request in English and converting it into the command line parameters and combinations I need.
Some examples…
“forward tcp port 876 to my raspberry pi server”
“list network port mappings on my router”
“list open tcp sockets on the laptop”
There could be two modes: run-automatically versus verify-then-run.
IMHO this could be a productive use of LLMs, and would be great OSNews material too.
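The verify-then-run idea above can be sketched in a few lines of Python. To be clear, everything here is hypothetical: the LLM is stubbed out with a hard-coded lookup table, and the commands and addresses are made up purely so the sketch runs.

```python
import subprocess

# Stand-in for the LLM. A real implementation would query a model trained
# on man pages; this hard-coded table exists only so the sketch is runnable.
def english_to_command(request: str) -> str:
    canned = {
        "list open tcp sockets on the laptop": "ss -tnp",
        "forward tcp port 876 to my raspberry pi server":
            "iptables -t nat -A PREROUTING -p tcp --dport 876 "
            "-j DNAT --to-destination 192.168.1.50:876",
    }
    return canned.get(request.strip().lower(), "")

def ai_shell(request: str, auto_run: bool = False) -> str:
    """Translate an English request into a shell command. In the default
    verify-then-run mode, show the command and ask before executing it."""
    cmd = english_to_command(request)
    if not cmd:
        return ""
    if auto_run or input(f"run `{cmd}`? [y/N] ").strip().lower() == "y":
        subprocess.run(cmd, shell=True, check=False)
    return cmd
```

The two modes fall out of one flag: `auto_run=True` is run-automatically, the default is verify-then-run. The hard part, of course, is the stubbed translation step, not the plumbing.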
@Alfman — sorry for not replying in thread; guess WP doesn’t support this many levels deep cuz there’s no reply button.
But yeah, command-line AI is a brilliant idea and now I don’t think I can be happy until someone makes it. In the meantime, I find tldr ( https://tldr.sh ) really useful — simplified man pages, plenty of examples.
I think many AI use cases today are not worthwhile. That feeds the current polarization of opinions.
Underrated terse comment here; I think you’re right.
There is a difference between being a sceptic and being disturbed by the development of AI, and I think Thom’s obsessive dismissal of the technology is not healthy. I am not being mean. I think there is value in having a balanced view.
It would be balanced if all this was opt-in. It seems, though, that our input is not needed or wanted. Corporations seem to think that cramming AI into everything and the kitchen sink will somehow make their waning products relevant again.
Asking Thom what his thoughts on AI are is like asking an employee of a buggy-whip manufacturer their thoughts on those newfangled motorcar things. Even if they make some valid points, their position is inherently biased.
Thom was a translator before AI took his job; in other words, his was exactly the kind of clerical job AI is coming after.
That’s missing nuance. Thom is more like a craftsman whose woodworking and furniture making has largely been supplanted by mass production. The replacement product is superficially similar and drastically cheaper, but it’s missing details and is sometimes of such low quality that it can’t fulfill its function. Customers generally have a sense of what extra value they’re paying for with hand-crafted furniture, and expect flat-pack furniture to have a bad piece or two.
Buggy whip comparisons are often wrong, because the analogy would be the horses losing their job market.
Even if the flat-pack furniture has a bad piece or two, you return them and get replacements; the cost savings are worth it. And who cares about missing details? It’s extra fluff. Also, good companies have a couple of people at the end of the line to make sure that bad piece or two doesn’t happen anyway.
This is how AI is eliminating clerical jobs: you have AI doing most of the work and a single person at the end proofreading everything.
kurkosdr,
I agree, this is how it happens, and I think more and more people are going to be displaced by higher-level automation, not only office jobs but things like truck driving too. The kinks are still being ironed out today, but long term the job insecurity is very real. I sympathize with people who lose their jobs, and IMHO our social institutions are doing a very bad job of preparing us for this. Not only are we ill-prepared, but we’re ripping up the safety nets and social programs that help humans through times of difficulty. It is going to come back to bite us.
@kurkosdr:
It’s not “extra fluff” when it comes to translation, it’s being sued because a contract was mistranslated or someone dying because a doctor’s diagnosis was misunderstood, due to shitty AI translation. A human translator will be able to apply actual knowledge and experience to their work, they will be able to research — and know *what* to research — to ensure the original meaning is left intact. Current AI tech is sometimes unable to avoid hallucination in English, let alone translate correctly and properly with nuance and awareness of idioms and specific dialects that can have a huge impact on the results.
That’s why they have a human at the end proofreading everything. But the boring work is done by the AI.
@kurkosdr:
That’s some serious backpedaling, going from “AI can replace humans” to “AI needs humans to proofread”. You just invalidated your own point.
Morgan,
I don’t think it’s invalid. Most companies have a pyramid structure where new employees at the bottom require frequent oversight by higher-level staff. Just because their work needs to be reviewed doesn’t mean they have no value. I think AI can be thought of in the same way: it doesn’t have to be at “level 5” on day one to take “level 1” jobs.
I think it’s a mistake for us to be dismissive of AI for failing to be perfect because the immediate threat to our jobs is not perfect AI, but cheaper AI. A corporation might deem it financially sensible to replace imperfect error prone humans with imperfect error prone AI. People make me feel like a traitor for even saying it, but the employer’s calculus can still favor AI despite flaws. I don’t say this because I think it’s what’s best for society, I say it because it’s imperative that we understand the corporate picture and that AI doesn’t have to reach perfection before it displaces human jobs.
@Alfman:
I’m not saying you are wrong, because you aren’t wrong and AI in the workplace is a nuanced subject. I’m actually in full agreement with what you said.
My point was that @kurkosdr backpedaled from saying essentially “AI can do X now, so there is no longer a need for humans to do X”, to saying “oh yeah ha ha we still need humans to verify AI’s results for X”. And I think that’s a good thing! It’s always good when one can look at what they said before and realize they were wrong about it and admit it. I’ve had to eat my own crow more than once, for sure. It’s just that in this case I wanted to highlight that no, AI should NOT be doing certain tasks until it has proven itself *more* capable than a human. In the case of translation, AI is laughably bad at it and it’s a poor example to try to hold up as a retort to the article. The adoption of AI in the translation industry is taking jobs, yes, but it’s also inherently dangerous to those who need the translation results to be laser accurate.
In short, it’s not ready to handle any industry, and I am hugely skeptical of anyone who insists that it is.
Morgan,
Yeah, I’m not really trying to take a position on whether AI is ready or not. I think it was last year when Thom posted an article about AI translation not being great. I do not doubt his claim; he is credible in my book and he would know. However, I’m not sure it matters that he’s right, because it is the corporations who get to decide regardless. Even if we can prove they are wrong, it’s their money and their prerogative. They will go for profits over quality, and their CEOs will be rewarded for it. Why would we expect anything else? Boeing engineers warned about dangerous shortcuts for years. They were proven right, but nobody listened and they lost.
I find that I don’t strongly disagree with most people’s opinions about AI in the present; however, I seem to be in disagreement with those saying jobs are safe and/or predicting that AI will stop evolving. Today we are criticizing generic AI doing specialized tasks… ok, that’s fine, but as costs come down it’s only a matter of time before companies start to commission job-specific bots, and I think these are going to be a lot more competitive than something like ChatGPT.
Unfortunately, it is all too easy to turn into the stereotypical grandpa screaming at clouds (pun intended) these days.
Not very healthy, I think. Not as harmful as what all this “ai” rigamarole is going to do with public mental health, but bad nonetheless.
Public mental health never recovered from the first cave paintings. Or at least so some people back then thought.
When the printing press was introduced, some people were certain it was going to set back education, as manuscript was how nature/god intended humans to read. Trains were supposed to make people hysterical, as the human brain was ill-equipped to deal with speeds faster than a horse’s sprint. Etc., etc.
Alas, here we are.
What’s with the ad hominem attack? Where is your measured, thought out, rational rebuttal? Do you accuse anyone you disagree with of needing mental health services and then refuse to present your side? What are you actually trying to say here, or do you have nothing to say so you attack the person instead of the argument?
What did you hope to accomplish with your comment?
Yesterday I discovered the “MCP” online hype bubble. People are really enthusiastic about the development of a standard way of allowing LLMs to control arbitrary applications on their system. I’ve been open-minded about these buzzwords, but so far I haven’t been able to figure out a) an actual use case that I’d be interested in, and b) the point of all the confusing jargon when these are really all just “LLM calls”.
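For what it’s worth, the pattern behind that jargon does reduce to very little code. A hedged sketch, with the model stubbed out — the tool registry, `fake_llm`, and its one-line “protocol” are all invented here for illustration; real systems like MCP layer discovery and transport on top of the same loop:

```python
# A model-facing "tool" is just a named function the LLM may ask to invoke.
TOOLS = {
    "get_time": lambda: "2025-05-21T12:00:00Z",
    "add": lambda a, b: a + b,
}

# Stand-in for a real model call: it "decides" which tool to invoke by name.
def fake_llm(prompt: str) -> dict:
    if "time" in prompt:
        return {"tool": "get_time", "args": []}
    return {"tool": "add", "args": [2, 3]}

def run(prompt: str):
    """The whole 'agent' loop: ask the model, dispatch the tool it named."""
    call = fake_llm(prompt)
    return TOOLS[call["tool"]](*call["args"])
```

Strip away the branding and it really is “LLM calls” plus a dispatch table; the standardization argument is about making that table interoperable across applications.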
AI is a fabulous control system. It’s not being built so it can help us plan our holidays, same way TV was not invented so we could watch football matches.
I can imagine similar articles in newspapers when factories started replacing manual workers. Some of the concerns were warranted, and we have issues because of factories to this day. But would you want to go back?
Even using modern technology at home, you can’t build an iPhone with a 3D printer.
Did you see NVIDIA’s recent keynote? The pitch was that humanoid robots were the only really viable choice for robotics, because only robots that replace humans had enough economies of scale to really take off. And making a robot that can do all the tasks we might want a human to do takes insane levels of AI. So, please buy unbelievably ridiculous amounts of AI from NVIDIA to make robots to replace humans for every commercially productive task. It was quite the pitch.
It’s all extra obnoxious given the politics surrounding it – not luxury AI space communism, but economic conservatism and austerity. Fire all workers! No UBI, no welfare, no public healthcare – only fire all workers! The subtext nobody wants to talk about is that the logical result of replacing all workers with robots, while also gutting the welfare state that provides for the unemployed, is mass death.
Of course, if you’ve been watching how the billionaire class reacts to global warming, you know that’s a foregone conclusion. These people truly, deeply, wholeheartedly do not care about us. The AI boom, like the bunker boom, is about selling them a future where they don’t need the inconvenience of human workers with human rights – where they can just let us die.
Just use AI to put the big tech companies out of business. They want to automate our jobs, but their jobs are the ones most vulnerable to automation.