Nvidia's CEO: "the gap with China is a matter of nanoseconds". And controversy erupts
On September 25, during the interview show Bg2 Pod, Nvidia CEO Jen-Hsun Huang took positions that have fueled a heated public debate. Over the course of the conversation, Huang defended China's economic system, praised the work culture known as "996", and called the label "China hawk" not a title of honor but a "mark of shame".
The US-China relationship
Huang recalled that he had previously convinced Donald Trump to lift the ban on selling Nvidia's H20 chips to China, in exchange, however, for a 15% levy on those exports. The situation has since changed: Beijing has responded to US restrictions with a block on sales of Nvidia chips. Commenting on the affair, the CEO said:
"We are in a competitive relationship with China. It is natural that they want to grow their own companies, and I have no objection to that."
According to Huang, China's strength lies in the quality of its entrepreneurs and the motivation of its workers, many of whom come from the country's top science and engineering universities.
He cited as an example the "996" model (working 9 a.m. to 9 p.m., six days a week) which, in his view, has helped produce the largest number of AI engineers in the world.
Innovation and the Chinese system
Huang rejected the idea that China is incapable of producing AI chips or of excelling in manufacturing. "Those who claim they are two or three years behind are wrong: the gap is on the order of a few nanoseconds," he declared.
He also stressed that the Chinese economy, contrary to the common perception of heavy centralization, is a competitive, decentralized system in which its 33 provinces and municipalities compete with one another, generating dynamism and entrepreneurial spirit.
Position on the United States and visa policy
Speaking about American technology, Huang reiterated that the United States must make the most of its technology sector, which he called a "national treasure." He urged promoting the global spread of US technology to strengthen its economic and geopolitical weight.
The CEO also commented on Trump's new H-1B visa policy, which imposes a $100,000 fee on each application. The H-1B is a non-immigrant work visa that allows US companies to hire foreign workers who hold a university degree or equivalent and have specialized skills in fields such as science, engineering, and technology.
While he considers the figure high, he called it a "good start" toward curbing abuses of the system, drawing a clear distinction between legal and illegal immigration.
The "China hawks"
Huang said he had only recently learned the term "China hawk," often worn as a badge of patriotism. He turned the concept on its head: "It's not a badge of honor, it's a badge of shame." In the CEO's view, taking extreme positions against China is not a patriotic act.
He also said the United States must act with the confidence of a great power: "If others want to compete with us, let them come. There is no doubt that Trump is the president who says 'let's do it'."
During the interview, Huang praised the language Trump has used toward China, noting that the president has never spoken of "decoupling." "It's a mistaken concept: the world's two largest economies cannot separate," he explained.
Previous pro-China statements
This is not the first time the Nvidia founder has spoken in China's favor. In July 2025, during an official visit, Huang gave a speech in Chinese and praised eleven local companies for their innovations.
He also expressed an intention to buy a car made by Xiaomi, calling it a "shame" that it was not available on the US market, and predicted that Huawei's AI chips will eventually replace Nvidia's.
Before the trip, a bipartisan group of US senators had already urged Huang to avoid contact with Chinese companies linked to the military and intelligence services, as well as with those subject to semiconductor export restrictions.
The article Nvidia's CEO: "the gap with China is a matter of nanoseconds". And controversy erupts appeared first on il blog della sicurezza informatica.
LLM Dialogue In Animal Crossing Actually Works Very Well
In the original Animal Crossing from 2001, players are able to interact with a huge cast of quirky characters, all with different interests and personalities. But after you’ve played the game for a while, the scripted interactions can become a bit monotonous. Seeing an opportunity to improve the experience, [josh] decided to put a Large Language Model (LLM) in charge of these interactions. Now when the player chats with other characters in the game, the dialogue is a lot more engaging, relevant, and sometimes just plain funny.
How does one go about hooking a modern LLM into a 24-year-old game built for an entirely offline console? [josh]’s clever approach required a lot of poking about, and did a good job of leveraging some of the game’s built-in features for a seamless result.
In addition to distinct personalities, villagers have a small shared “gossip” memory.
The game runs on a GameCube emulator, and the first thing needed is a way to allow the game and an external process to communicate with each other. To do this, [josh] uses a modding technique called Inter-Process Communication (IPC) via shared memory. This essentially defines a range of otherwise unused memory as a mailbox that both the game state and an external process (like a Python script) can access.
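The mailbox idea can be sketched in a few lines of Python using the standard library's shared-memory support. Everything here is illustrative: the segment name, the field layout, and the offsets are assumptions, not [josh]'s actual choices (in the real project the game-side "process" is the patched GameCube code running in the emulator, not Python).

```python
# Illustrative sketch of a shared-memory "mailbox": one byte-level flag
# plus a length-prefixed text field that two processes can both see.
# Names, offsets, and layout are invented for this example.
from multiprocessing import shared_memory
import struct

MAILBOX_NAME = "ac_mailbox"  # hypothetical segment name
MAILBOX_SIZE = 1024          # a small region of "otherwise unused" memory
# Layout: byte 0 = flag (0 empty, 1 request pending), bytes 4-5 = speaker
# ID, bytes 6-7 = text length, bytes 8+ = UTF-8 dialogue text.

def open_mailbox(create=False):
    return shared_memory.SharedMemory(
        name=MAILBOX_NAME, create=create, size=MAILBOX_SIZE)

def post_request(mbox, speaker_id: int, text: str):
    """Game side: drop a dialogue request into the mailbox."""
    payload = text.encode("utf-8")
    struct.pack_into("<BxxxH", mbox.buf, 0, 1, speaker_id)  # flag + speaker
    struct.pack_into("<H", mbox.buf, 6, len(payload))       # text length
    mbox.buf[8:8 + len(payload)] = payload

def read_request(mbox):
    """Script side: poll the flag, then read speaker and text."""
    flag, speaker_id = struct.unpack_from("<BxxxH", mbox.buf, 0)
    if flag != 1:
        return None
    (length,) = struct.unpack_from("<H", mbox.buf, 6)
    text = bytes(mbox.buf[8:8 + length]).decode("utf-8")
    mbox.buf[0] = 0  # mark the mailbox empty again
    return speaker_id, text
```

The polling-plus-flag pattern keeps the two sides decoupled: the game never blocks on the Python script, it just checks whether an answer has landed yet.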
[josh] then nailed down the exact memory locations involved in dialogue. This was a painstaking process that required a lot of memory scanning, but eventually [josh] found where the game stores the active speaker and the active dialogue text when the player speaks to a villager. That wasn’t all, though. The dialogue isn’t just plain ASCII, it contains proprietary control codes that sprinkle things like sounds, colors, and speaker emotes into conversations.
The system therefore watches for dialogue, and when a conversation is detected, the “Writer” LLM — furnished with all necessary details via the shared memory mailbox — is asked to create relevant dialogue for the character in question. A second “Director” LLM takes care of adding colors, facial expressions, and things of that nature via control codes.
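The Writer/Director split can be sketched as a simple two-stage pipeline. The stub functions below stand in for real model calls, and the control-code syntax is invented for illustration:

```python
# Two-stage dialogue pipeline sketch. The "LLMs" here are deterministic
# stubs; a real implementation would issue two separate model calls with
# different prompts. The <emote:...>/<pause:...> tags are made-up stand-ins
# for the game's proprietary control codes.
def writer_llm(speaker: str, context: str) -> str:
    # Stage 1: decide WHAT the villager says, given personality + gossip.
    return f"Hi! I heard {context}."

def director_llm(plain_text: str) -> str:
    # Stage 2: decide HOW it is presented, by weaving in control codes.
    return "<emote:happy>" + plain_text + "<pause:short>"

def generate_dialogue(speaker: str, context: str) -> str:
    draft = writer_llm(speaker, context)
    return director_llm(draft)
```

Splitting content from presentation this way keeps each prompt focused: the Writer never has to know the control-code grammar, and the Director never has to stay in character.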
[josh] even added a small bit of shared “gossip” memory among all villagers which keeps track of who said what to whom, and how they felt about it. This perhaps unsurprisingly results in a lot of villagers grumbling about just how much currency flows directly to Tom Nook, the raccoon proprietor of the local store.
A very clever detail pointed out by [Simon Willison] is how [josh] deals with the problem of the game expecting dialogue to be immediately available at the given memory location. After all, LLMs don’t work instantly. Turns out [josh]’s code makes clever use of a built-in dialogue control code that creates a short pause. Whenever a dialogue screen opens, a few short pauses ensure that the LLM’s work is done in time.
If Animal Crossing isn’t retro enough, or you prefer your LLMs to be a little more excitable, AI commentary for Pong is totally a thing.
youtube.com/embed/7AyEzA5ziE0?…
2025 Hackaday Speakers, Round One! And Spoilers
Supercon is the Ultimate Hardware Conference and you need to be there! Just check out this roster of talks that will be going down. We’ve got something for everyone out there in the Hackaday universe, from poking at pins, to making things beautiful, to robots, radios, and FPGAs. And this isn’t even half of the list yet.
We’ve got a great mix of old favorites and new faces this year, and as good as they are, honestly the talks are only half of the fun. The badge hacking, the food, the brainstorming, and just the socializing with the geekiest of the geeky, make it an event you won’t want to miss. If you don’t have tickets yet, you can still get them here.
Plus, this year, because Friday night is Halloween, we’ll be hosting a Sci-Fi-themed costume party for those who want to show off their best props or most elaborate spacesuits. And if that is the sort of thing that you’re into, you will absolutely want to stay tuned to our Keynote Speaker(s) announcement in a little while. (Spoiler number one.)
Joe FitzPatrick
Probing Pins for Protocol Polyglots
This talk explores stacking multiple protocols, like UART, SPI, and I2C, onto the same GPIO pins by exploiting undefined “don’t care” regions. Learn how to bitbang several devices at once, creating protocol polyglots without extra hardware.
Elli Furedy
Sandbox Systems: Hardware for Emergent Games
From Conway’s Game of Life to cyberpunk bounty hunting in the desert, this talk explores how thoughtful design in tech and hardware can lead to human connection and community. Elli Furedy shares lessons from years of building hardware and running an immersive experience at the event Neotropolis.
Andrew [Cprossu] Lewton
Cracking Open a Classic DOS Game
Take a nostalgic and technical deep dive into The Lawnmower Man, a quirky full-motion video game for DOS CD-ROM. We’ll explore the tools and techniques used to reverse-engineer the game, uncover how it was built, and wrap things up with a live demo on original hardware.
Reid Sox-Harris
Beyond RGB: The Illuminating World of Color & LEDs
RGB lighting is everywhere and allows any project to display millions of unique colors. This talk explores the physiology of the human eye that allows RGB to be so effective, when alternatives are better, and how to choose the right lighting for your project.
Cyril Engmann
What Makes a Robot Feel Alive?
This talk dives into the art and engineering of programming personality into pet robots, crafting behaviors, reactions, and quirks that turn a pile of parts into a companion with presence. Learn design tips, technical insights, and lessons from building expressive bots that blur the line between hardware and character.
Artem Makarov
Hacked in Translation: Reverse Engineering Abandoned IoT Hardware
This talk takes us on a tour of adventures reviving an abandoned IoT “AI” translator, 2025-style. From decoding peculiar protocols to reverse engineering firmware & software, discover how curiosity and persistence can breathe new life into forgotten hardware and tackle obscure technical challenges.
Samy Kamkar
Optical Espionage: Lasers to Keystrokes
We’ll learn how to identify what a target is typing from a distance through a window with an advanced laser microphone capable of converting infrared to vibrations to radio back to sound, and the electrical, optical, radio, and software components needed for cutting-edge eavesdropping.
Zachary Peterson
Cal Poly NerdFlare: Bringing #badgelife to Academia
A small experiment with PCB art and interactive badges became a campus-wide creative movement. Hear how students combined art, technology, and real-world tools to build community, develop skills, and create projects that are as accessible as they are unforgettable.
Javier de la Torre
Off the Grid, On the Net: Exploring Ham Radio Mesh Networks
This talk dives into using outdoor wireless access points to join a ham radio mesh network (ham net). Learn how services like weather stations, video streams, email, and VOIP are run entirely over the mesh, without needing commercial internet, all within FCC Part 97 rules.
Debra Ansell
LEDs Get Into Formation: Mechanically Interesting PCB Assemblies
This talk discusses a range of projects built from custom LED PCBs combined into two- and three-dimensional structures. It explores methods of connecting them into creative arrangements, both static and flexible, including the “Bendy SAO” which won a prize at Supercon 2024.
Jeremy Hong
Rad Reverb: Cooking FPGAs with Gamma Rays
This talk presents research on destructive testing of commercial off-the-shelf (CoTS) FPGAs using cobalt-60 and cesium-137 radiation to study failure modes and resilience in high-radiation environments. Learn about a novel in-situ measurement method that allows real-time observation of integrated circuits during exposure, capturing transient faults and degradation without interrupting operation.
Doug Goodwin
Aurora Blue
Earth’s magnetic field is glitching out. Phones fail, satellites drop, auroras flood the skies. This talk dives into Aurora Blue, which imagines this future through post-digital imaging hacks: cyanotype prints exposed by custom light-field instruments that flow like auroras. Deep-blue works built to endure, sky relics you can hold after the cloud crashes.
Workshop News, and another Spoiler
Sadly, we’ve got to announce that the Meshtastic workshop with Kody Kinzie will not be taking place. But Spoiler Number Two is that the badge this year will have all of the capabilities of that project and much, much more. If you’re into LoRa radio, meshes, and handheld devices, you’ll want to watch out for our badge reveal in the upcoming weeks.
Oh, and go get your tickets now before it’s too late. Supercon has sold out every year, so you can’t say that we didn’t tell you.
A Trail Camera Built With Raspberry Pi
You can get all kinds of great wildlife footage if you trek out into the woods with a camera, but it can be tough to stay awake all night. However, this is a task you can readily automate, as [Luke] did with his DIY trail camera.
A Raspberry Pi Zero 2W serves as the heart of the build. It’s compact and runs on very little power, but also provides a good amount more processing power than the original Raspberry Pi Zero. It’s kitted out with the Raspberry Pi AI Camera, which uses the Sony IMX500 Intelligent Vision Sensor — providing a great platform for neural networks doing image classification and similar machine learning tasks. A Witty Pi power management module is used both for its real time clock and to schedule start-ups and shutdowns to best manage the power on offer from the batteries. All these components are wrapped up in a 3D printed housing to keep the Pi safe out in the wild.
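The on/off schedule a module like the Witty Pi enforces is easy to reason about in code. The helper below is purely illustrative; the wake-up time is an assumption for the example, not taken from [Luke]'s build:

```python
# Illustrative scheduling helper: compute the next time the camera
# should power on for a nightly session. The 19:00 wake time is an
# assumed value, not [Luke]'s actual configuration.
from datetime import datetime, time, timedelta

WAKE_AT = time(19, 0)  # power on at dusk (assumed)

def next_wakeup(now: datetime) -> datetime:
    """Return the next datetime at which the camera powers on."""
    today_wake = datetime.combine(now.date(), WAKE_AT)
    if now < today_wake:
        return today_wake          # later today
    return today_wake + timedelta(days=1)  # tomorrow evening
```

Keeping the Pi powered down outside this window is what stretches a battery pack across many nights in the field.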
We’ve seen some neat projects in this vein before.
youtube.com/embed/qhY_3XCSYsM?…
A Cut Above: Surgery in Space, Now and In the Future
In case you hadn’t noticed, we live in a dangerous world. While our soft, fleshy selves are remarkably good at absorbing kinetic energy and healing the damage that results, there are very definite limits to what we humans can deal with, beyond which we’ll need some help. Car crashes, falls from height, or even penetrating trauma such as gunshot wounds — events such as these will often land you in a trauma center where, if things are desperate enough, you’ll be on the operating table within the so-called “Golden Hour” of maximum survivability, to patch the holes and plug the leaks.
While the Golden Hour may be less of a hard limit than the name implies, it remains true that the sooner someone with a major traumatic injury gets into surgery, the better their chances of survival. Here on planet Earth, most urban locations can support one or more Level 1 trauma centers, putting huge swathes of the population within that 60-minute goal. Even in rural areas, EMS systems with Advanced Life Support crews can stabilize the severely wounded until they can be evacuated to a trauma center by helicopter, putting even more of the population within this protective bubble.
But ironically, residents in the highest-priced neighborhood in human history enjoy no such luxury. Despite only being the equivalent of a quick helicopter ride away, the astronauts and cosmonauts aboard the International Space Station are pretty much on their own when it comes to any traumatic injuries or medical emergencies that might crop up in orbit. While the ISS crews are well-prepared for that eventuality, as we’ll see, there’s only so much we can do right now, and we have a long way to go before we’re ready to perform surgery in space.
Stacking the Deck
In the relatively short time that humans have been going to space, we’ve been remarkably lucky in terms of medical emergencies. Except for the incidents resulting in total loss of ship and crew, on-orbit medical events tend to be few and far between, and when they do occur, they tend to be minor, such as cuts, abrasions, nasal congestion, and “space adaptation syndrome,” a catch-all category of issues related to getting used to weightlessness. On the more serious end of the spectrum are several cases of cardiac arrhythmias, none of which required interventions or resulted in casualties.
There are a few reasons why medical incidents in space have been so rare. Chief among these is the stringent selection process for astronauts and cosmonauts, which tends to weed out anyone with underlying problems that might jeopardize a mission. This means that everyone who goes to space tends to be remarkably fit, which reduces the chance of anything untoward happening in orbit. Pre-flight quarantines are also used to keep astronauts from bringing infectious diseases up to orbit, where close quarters could result in rapid transmission between crew members.
Also, once these extremely fit individuals get to orbit, they’re among the most closely medically monitored people in history. Astronauts of the early Space Race programs and into the Shuttle program days were heavily instrumented, with flight surgeons constantly measuring just about every medical parameter engineers could dream up a sensor for. Continuous monitoring of crew vital signs isn’t really done much anymore, unless it’s for a particular on-orbit medical study, but astronauts are still better monitored than the average Joe walking around on the ground, which offers the chance to pick up on potential problems early and intervene before they become mission-threatening issues.
Strangely enough, all this preoccupation with mitigating medical risks doesn’t appear to include the one precaution you’d think would be a no-brainer: preflight prophylactic appendectomy. While certain terrestrial adventures, such as overwintering in Antarctica, require the removal of the appendix, the operation isn’t mandated for astronauts and cosmonauts, probably due to the logic that anyone with a propensity toward intestinal illness will likely be screened out of the program before it becomes an issue. Also, even routine surgery like an appendectomy carries the risk of surgical complications like abdominal adhesions. This presents the risk of intestinal obstruction, which could be life-threatening if it crops up in orbit.
Mechanisms of Injury
Down here on Earth, we have a lot of room to get into trouble. We’ve got stairs to fall down, rugs to trip over, cars to crash, and through it all, that pesky acceleration vector threatening to impart enough kinetic energy to damage our fragile selves. In the cozy confines of the ISS or any of the spacecraft used to service it, though, it’s hard to get going fast enough to do any real damage. Also, the lack of acceleration — most of the time — eliminates the risk of falling and hitting something, one of the most common mechanisms of injury here on Earth.
youtube.com/embed/d1iO-yDp_nA?…
Still, space is a dangerous place, and there is an increasing amount of space debris with the potential to cause injuries. Even with ballistic shielding on the ISS hull and micrometeoroid protection built into EVA suits, penetrating trauma is still possible. Blunt-force trauma is a concern as well, particularly during extravehicular activities where astronauts might be required to handle large pieces of equipment; even in free-fall, big things are dangerous to be around. Bones tend to demineralize during extended spaceflights, too, meaning an EVA could result in a fracture. EVAs can also present cardiac risks, with the stress of spacewalking potentially triggering an undetected and potentially serious arrhythmia.
Advanced Diagnostic Ultrasound in Microgravity (ADUM) is currently the only medical imaging modality available on the ISS. Source: NASA
Another underappreciated risk of spaceflight is urological problems. Fred Haise, lunar module pilot for the doomed Apollo 13 mission, famously developed a severe urinary tract infection due to the stress and dehydration of the crew’s long, cold return to Earth. Even in routine spaceflights, maintaining adequate hydration is difficult; coupled with excessive urination caused by the redistribution of fluids and increased excretion of calcium secondary to bone demineralization, kidney stones are a real risk.
Kidney stones aren’t just a potential problem; they have happened. A cosmonaut, reportedly Anatoly Solovyev, developed symptomatic kidney stones during a Mir mission in the 1990s. Luckily, he was able to continue the mission with just fluids and pain medications, but kidney stones can be excruciatingly painful and completely debilitating, and should a stone cause an obstruction and urinary retention, it could require surgery to resolve.
The Vertical Ambulance Ride
Given all these potential medical risks, is the ISS equipped for surgical interventions? In a word: no. While ISS crew members undergo extensive medical training, and the station’s medical kit is well-stocked, no allowance has been made for even the simplest of surgical procedures in orbit. The reasoning is simple: with at least one Soyuz or Dragon capsule berthed at the station at all times and a small, low-risk population aboard, the safest approach to a major medical issue is to evacuate the patient back to Earth.
That’s easier said than done, of course. Undocking a Soyuz or Crew Dragon from the ISS and returning to the ground takes a minimum of three to six hours, and potentially longer if a severely injured astronaut cannot easily don the required pressure suit. Recovery time once the capsule lands could also be prolonged for an unplanned lifeboat return; adding in transport time to a medical facility, it could be six hours or more before advanced treatment can begin.
To make sure the astronaut survives what amounts to a protracted and very expensive ambulance ride, the crew will attempt to stabilize the patient as best as possible. The designated crew medical officer (CMO) has training in starting IVs, performing endotracheal intubation, and even thoracocentesis, or the placement of a chest tube. On top of the medications available in the station med kit and with help from flight surgeons on the ground, the crew should be able to stabilize the patient well enough for the ride home.
Practice Makes Perfect
Obviously, though, the medevac strategy only works if the accident occurs close to Earth. As we push crewed missions deeper into space, evacuation will likely be off the table, and even with a crew carefully curated for extreme fitness, eventually the law of averages will catch up to us, and it will become necessary to perform surgery in space. And even though that first space surgery will likely be performed under emergent conditions, probably by an untrained crew, that doesn’t mean future space surgeons will be flying completely blind.
Back in 2016, a multidisciplinary group in Canada undertook a unique comparative study of simulated surgery under weightless conditions. Using a Dassault Falcon 20 Research Aircraft — essentially Canada’s version of NASA’s famous “Vomit Comet” — a team of ten surgeons took turns performing a common trauma procedure: surgical hemorrhage control of an exsanguinating liver laceration. Such an injury could easily occur in space, either through blunt-force or penetrating trauma, especially on a mission that would include any sort of construction tasks.
The goal of the trial was to compare simulated blood loss between surgery performed in zero-g conditions and the same operation performed on the ground. A surgical simulator called a “Cut Suit,” which looks and acts like a human torso, was secured to a makeshift surgical table in the cramped confines of the Falcon — a good simulation of what will likely be the cramped quarters of any future interplanetary spacecraft. The surgeon and an assistant were secured in a kneeling position in front of the simulator using bungee cords, along with a technician charged with maintaining a simulated blood pressure of 80 mm Hg in the Cut Suit.
For the zero-g surgery, the Falcon flew parabolic paths that resulted in 20-second bursts of weightlessness. All airborne surgical tasks were performed only during weightlessness; for the 1-g operation, which was performed with the same aircraft parked in a hangar, the surgeons were limited to 20-second work windows at the same cadence as the zero-g surgery. The surgeries were extensively documented with video cameras for post-surgical review and corroboration with simulated blood flow measurements during the procedures.
The results were surprisingly good. All ten surgeries were completed successfully, although two surgeons had to tap out of the final closing task to keep from vomiting into the surgical field. Although all surgeons reported that the zero-g surgery was subjectively harder, objective results, such as blood loss and time needed to complete each surgical task, were all at least slightly better at zero-g than 1-g. It needs to be stressed that even for simulations, these were simplified surgeries, perhaps overly so. There was no attempt at infection control: no draping of the patient or disinfection of the field, no gowning or scrubbing, and no aseptic procedure while handling instruments. Also, there was no simulated anesthesia, a critical step in the procedure. But still, it suggests that the basic mechanics of one kind of surgery could be manageable under deep-space conditions.
Simulating space surgery aboard NASA’s “Vomit Comet.” This study from the University of Louisville aims to develop tools and techniques to make space surgery possible. Source: Seeker
Aside from testing more realistic surgical procedures under zero-g, more testing will be required to see what weightless post-op and recovery look like. The operation selected for the trial was somewhat incomplete because packing a liver wound isn’t really an endpoint in itself, but more of a stop along the way to recovery. Packing is just what it sounds like — absorbent material packed around the wound to staunch the flow of blood and to provide some direct pressure to allow blood to clot so the wound can heal naturally. The packing material will have to be removed eventually, and while it’s possible to remove it via surgical drains placed during the packing operation, it’s more likely that another open-field or at least a laparoscopic operation will be needed to take the packing material out and tidy up any wounds that haven’t healed by themselves.
The placement of surgical drains also brings up another problem of zero-g surgery. In terrestrial surgery, drains are generally placed in locations where blood and fluids are expected to pool. For the liver packing example, drains would generally be placed posterior to the liver, since the patient would be lying in bed during recovery and the blood would tend to pool at the back of the peritoneal cavity. In space, though, how those fluids would be removed is an open question. Exploring that question might be difficult; since recovery takes days or even weeks, it would be hard to simulate in 20-second bursts. Artificial gravity might help with wound drainage, but the effects of the Coriolis force on the healing process would have to be explored, too.
Given that we’ve been doing surgery here on Earth for thousands of years, it’s surprising how many question marks remain over doing exactly the same things in microgravity. But for surgery, space remains the final frontier.
Microsoft launches Agent Mode in Excel and Word! Fewer formulas, more artificial intelligence
Microsoft has launched Agent Mode, an AI-based feature in Excel and Word that automatically builds complex spreadsheets and text documents from a single text prompt.
Copilot Chat has also gained Office Agent, based on Anthropic models, which lets users quickly create PowerPoint presentations and Word documents.
Agent Mode in Excel and Word is a more powerful version of the Copilot assistant already built into the Office suite. One of the agent's jobs is to make Excel's complex functions accessible to ordinary users. The AI agent is based on OpenAI's GPT-5 model.
When given a complex task, it breaks it down into steps and produces a plan with explanations, letting the user monitor its progress. Each step is further divided into specific activities, and every action the agent takes is shown in a sidebar.
The AI agent scored 57.2% in Excel on SpreadsheetBench, a benchmark designed specifically to assess models' ability to edit spreadsheets. That is higher than the scores of Shortcut.ai, the ChatGPT agent, and Anthropic's Claude Opus 4.1 (Files), but below the human score of 71.3%.
Agent Mode in Word is not limited to editing and summarizing text. It drafts materials, suggests clarifications, and flags where a document still needs finishing. It can consolidate several months of work data into a single report, summarize the month's results, and quickly identify differences from the previous report.
Office Agent, built on Anthropic models, runs in Copilot chat outside the Office suite, but lets users create PowerPoint presentations and Word documents directly within the chat. For PowerPoint, users get a logically structured presentation, with the AI able to pull from web resources and show slide previews as it works.
It is worth noting that while OpenAI's models are the primary ones in the Office suite, models from another developer, Anthropic, are gaining more and more ground in the Microsoft ecosystem. Microsoft integrated Office Agent into the Copilot chat app by calling Anthropic's API, which runs on Amazon Web Services, a direct competitor of Microsoft. This may explain why the Office suite does not yet have deep integration with Anthropic's models.
The AI agent mode in Word and Excel is already available to participants in the Frontier experimental-features program; a Microsoft 365 Copilot or Microsoft 365 Personal/Family subscription is required. Although it is currently available only in the web versions of the apps, it will soon reach the desktop versions as well. Office Agent, too, is currently available only to Frontier users with Microsoft 365 Copilot or Microsoft 365 Personal/Family subscriptions in the United States.
The article Microsoft launches Agent Mode in Excel and Word! Fewer formulas, more artificial intelligence appeared first on il blog della sicurezza informatica.
Goodbye flesh-and-blood stars? Meet Tilly Norwood, the first AI actress!
In an industry once dominated by live stars, digital characters are increasingly making inroads. During a summit in Zurich, Ellin van der Velden, an actress, comedian, and technologist, announced that her AI agency, Xicoia, is in talks with several major agents to sign its first virtual talent: an AI actress named Tilly Norwood.
Ellin van der Velden presented her initiative on a panel devoted to AI in the entertainment industry. She described the AI production studio Particle6, which later evolved into Xicoia, an agency specializing in creating, managing, and monetizing "hyper-realistic digital stars." Tilly Norwood is the first "actress" of this kind, able to interact with audiences as a fully fledged media personality.
This is not just about visualization; it is about fully integrating AI characters into the production chain. According to Ellin van der Velden, back in February many studios were skeptical of AI workflows: "Everyone said: 'It's not serious, it won't work'."
This image does not show Tilly Norwood; it is a clone created by Red Hot Cyber using Fooocus AI and an NVIDIA RTX 4060.
By May, however, the rhetoric had changed radically. Tilly began attracting interest from agencies, and a public announcement of who will take her under their wing is now in preparation. It will be one of the first times an actress created entirely by AI has official representation in show business.
Verena Pum, a former AI artist and now a Luma AI employee, also confirmed the shift in mood across the industry. She recalled that until recently studios openly denied using AI or concealed any real progress.
"At the beginning of the year, producers started reaching out to me, asking to discuss AI integration and to explain how to build pipelines and adapt workflows," Pum said.
According to her, many major industry players are already developing projects that use AI, but they are doing so under NDA and are not yet ready to make the news public.
This image, too, does not show Tilly Norwood; it is another clone created by Red Hot Cyber using Fooocus AI and an NVIDIA RTX 4060.
Still, both Ellin van der Velden and Pum agree on one thing: in the coming months we can expect major announcements from Hollywood studios, where AI talents like Tilly Norwood will be used alongside real actors. Pum added that studios need time to gain confidence, but we will see a series of public announcements early next year.
The article Goodbye flesh-and-blood stars? Meet Tilly Norwood, the first AI actress! appeared first on il blog della sicurezza informatica.