"YouTube has introduced a new like button on select videos that rewards your engagement with a brief animation."
YouTube is training viewers like dogs.
"Want the treat!? Want the treat?! Do your little trick!
You did! Who's a good viewer?! Who is?! You are! Yes you are!"🐶
theverge.com/news/810319/youtu…
YouTube now rewards your likes with fun animations
There are currently 20 animations customized for different genres, including sports, pets, and travel videos. Andrew Liszewski (The Verge)
Air Delays Will Happen Due to Shutdown: Rep. Graves
https://www.bloomberg.com/news/videos/2025-10-30/air-delays-will-happen-due-to-shutdown-rep-graves-video?utm_source=flipboard&utm_medium=activitypub
Posted into Bloomberg Television @bloomberg-television-bloomberg
Ernest is alive
IIRC, in one of his last public appearances before abandoning Kbin, he mentioned that he had health issues.
Just noticed he has a blog where he occasionally posts, the latest post being from September, and in the "about me" section he also mentions having to drop Kbin.
Going by his posts and his repositories, he doesn't seem involved with ActivityPub anymore, at least not publicly. But I'm sharing in case anyone was worried about him.
The post Sarah Bosetti wants to criticize Merz's "Stadtbild" remark – and ends up agreeing with him appeared first on Apollo News. #news #press
Sarah Bosetti wants to criticize Merz's "Stadtbild" remark – and ends up agreeing with him - Apollo News
ZDF cabaret artist Sarah Bosetti also attacks Friedrich Merz over his "Stadtbild" remark. But her problem is that she actually agrees with the Chancellor. Redaktion (Apollo News)
Nope, Steve Heilemann* - Biden did NOT wander off "as some older ppl do" during the parachutist thing.
He just went off script to go greet & acknowledge the Italian parachutists who'd just performed in his honor. He was then "guided back" b/c Meloni wanted to get back on script:
google.com/search?q=what+was+t…
*John Heilemann on Deadline: White House 10/30/2025, comparing this to Trump's apparent inability to negotiate a simple rectangle during a greeting ceremony in #Japan, never looking at anyone.
Here's a screenshot of a news article display, likely on a mobile device. The top portion features text stating “Guidance from Meloni: Italian Prime Minister was seen guiding Biden back to the group, but this was a normal part of the event’s protocol and did not indicate the person was unaware of surroundings.” Below this text are three article previews.
The first preview shows a video titled “Biden G7 video: conservative media…” with a play button and a duration of “1:06” and is attributed to “NBC News - David In…” with a CNN logo. The second preview mentions "Right-wing outlets…" and is dated “Jun 13, 2024” also attributed to CNN. The third preview does not have any visible title or content.
At the bottom is a call to action which reads "Dive deeper in AI Mode" with a share icon, upvote icon, and a comment icon. The bottom of the screenshot also includes a small Reuters logo.
―
Here's an alt-text description of the image:
The image is a screenshot of text, likely from a news article or online post, detailing a controversial video involving President Biden. The text discusses a video taken during the June 2024 G7 summit in Italy, where President Biden walked toward a group of parachutists after a demonstration. The text states the video became controversial because it was "tightly cropped" as it circulated online and in news outlets. The text also mentions the footage was used to falsely portray President Biden as "disoriented and 'wandering off'" from colleagues. The following bullet points are included: "Cropped footage: Conservative media and the Republican National Committee posted videos that were tightly cropped to exclude the landed parachutists Biden was approaching" and "Misinformation: Without this context, the footage falsely portrayed Biden as disoriented and 'wandering off' from colleagues."
―
Here's an alt-text description of the image:
This image presents a list of bullet points regarding a video featuring President Biden. The background is a light gray, and the text is black. The bullet points detail a situation involving a video clip that went viral, with some outlets using it to criticize the president's mental fitness. The list includes the phrases: "Misinformation: Without this key context, the footage portrayed Biden as disoriented and ‘wandering off’", "Reaction: The manipulated clip went viral, with some outlets using it to criticize the president's mental fitness. Social media platform X (formerly Twitter) later added a community note to the New York Post’s tweet, acknowledging that the video had been cropped", "Full video: Unedited footage from the event shows Biden walking toward and greeting one of the newly landed skydivers with a thumbs-up", and "Polite interaction: British Prime Minister Rishi Sunak, who was also at the summit, later stated that Biden was ‘being very polite’ by going over to talk to the individual skydivers". The final bullet point begins with the phrase "Guidance from Meloni:".
Provided by @altbot, generated privately and locally using Gemma3:27b
🌱 Energy used: 0.835 Wh
Self-Driving Cars and the Fight Over the Necessity of Lidar
If you haven't lived under a rock for the past decade or so, you will have seen plenty of arguing in the media by prominent figures and their respective fanbases about what the right sensor package is for autonomous vehicles, or 'self-driving cars' in popular parlance. As the task here is to effectively replicate what is achieved by the human Mark 1 eyeball and the associated processing hardware in the evolutionary layers of patched-together wetware (the human brain), it might seem tempting to think that a bunch of modern RGB cameras and a zippy computer system could handle the same vision task quite easily.
This is where reality throws a couple of curveballs. Although RGB cameras lack evolutionary glitches like an inverted image sensor and a big dead spot where the optic nerve punches through said sensor layer, it turns out that the preprocessing performed in the retina, the processing in the visual cortex, and the analysis in the rest of the brain are really quite good at detecting objects, no doubt helped by millions of years in which only those who managed not to get eaten by predators procreated in significant numbers.
Hence the solution of sticking something like a Lidar scanner on a car makes a lot of sense. Not only does this provide detailed information about one's surroundings, it also isn't bothered by rain and fog the way an RGB camera is. Having more and better-quality information makes subsequent processing easier and more effective, or so it would seem.
Computer Vision Things
A Waymo Jaguar I-Pace car in San Francisco. (Credit: Dllu, Wikimedia)
Giving machines the ability to see and recognize objects has been a dream for many decades, and the subject of countless science-fiction works. For us humans this ability develops over the course of our lives, from a newborn with a still-developing visual cortex to a young adult who has hopefully learned how to identify objects in their environment, including details like which objects are edible and which are not.
As it turns out, just the first part of that challenge is pretty hard. Interpreting a scene captured by a camera involves many possible algorithms that seek to extract edges, infer connections based on various hints, estimate the distance to each object, and determine whether it is moving or not. All of this just to answer the basic question of which objects exist in a scene, and what they are currently doing.
Approaches to object detection can be subdivided into conventional and neural network approaches, with methods employing convolutional neural networks (CNNs) being the most prevalent these days. These CNNs are typically trained on a dataset that is relevant to the objects that will be encountered, such as while navigating in traffic. This is what is used for autonomous cars today by companies like Waymo and Tesla, and it is why they need both access to a large dataset of traffic videos to train with and a large collection of employees who watch said videos in order to tag as many objects as possible. Once tagged and bundled, these videos then become CNN training data sets.
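To make the tagging step a bit more concrete, here is a minimal Python sketch of what one labeled frame in such a training set could look like; the class names and fields are invented for illustration and are not Waymo's or Tesla's actual annotation schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BoundingBox:
    """One tagged object in a single video frame (pixel coordinates)."""
    label: str        # e.g. "pedestrian", "cyclist", "traffic_light"
    x_min: float
    y_min: float
    x_max: float
    y_max: float

@dataclass
class LabeledFrame:
    """A single camera frame plus the annotations a human tagger added."""
    video_id: str
    frame_index: int
    boxes: List[BoundingBox]

# A tiny hand-made "data set"; in practice this would be millions of frames.
dataset = [
    LabeledFrame(
        video_id="dashcam_0001",
        frame_index=421,
        boxes=[
            BoundingBox("pedestrian", 312.0, 140.0, 370.0, 290.0),
            BoundingBox("traffic_light", 610.0, 22.0, 640.0, 95.0),
        ],
    ),
]

# During training, each frame's pixels go into the CNN and the tagged boxes
# serve as the ground truth that the network's predictions are scored against.
for frame in dataset:
    print(frame.video_id, frame.frame_index, [b.label for b in frame.boxes])
```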
This raises the question of how accurate this approach is. With purely RGB camera images as input, the answer appears to be 'sorta'. Although only considered to be a Level 2 autonomous system according to the SAE's 0-5 rating system, Tesla vehicles with the Autopilot system installed have failed to recognize hazards on multiple occasions, including the side of a white truck in 2016, a concrete barrier between a highway and an offramp in 2018, and running a red light and rear-ending a fire truck in 2019.
This pattern continues year after year, with the Autopilot system failing to recognize hazards and engage the brakes, including in the so-called 'Full Self-Driving' (FSD) mode. In April of 2024, a motorcyclist was run over by a Tesla in FSD mode when the system failed to stop and instead accelerated. This made it the second fatality involving FSD mode, which has since been renamed 'FSD Supervised'.
Compared to the considerably less crash-prone Level 4 Waymo cars with their hard-to-miss sensor packages strapped to the vehicle, one could conceivably make the case that just a couple of RGB cameras is not enough for reliable object detection, and that blending multiple sensors is quite possibly the more reliable approach.
Which is not to say that Waymo cars are perfect, of course. In 2024 one Waymo car managed to hit a utility pole at low speed during a pullover maneuver, when the car's firmware incorrectly assessed its response to a situation where a 'pole-like object' was present but there was no hard edge between said pole and the road.
This gets us to the second issue with self-driving cars: making the right decision when confronted with a new situation.
Acting On Perception
The Tesla Hardware 4 mainboard with its redundant custom SoCs. (Source: Autopilotreview.com)
Once you know what objects are in a scene and merge this with the known state of the vehicle, the next step for an autonomous vehicle is to decide what to do with this information. Although the tempting answer might be to also use 'something with neural networks' here, this has turned out to be a non-viable method. Back in 2018 Waymo created a recurrent neural network (RNN) called ChauffeurNet, which was trained on both real-life and synthetic driving data to have it effectively imitate human drivers.
The conclusion of this experiment was that while deep learning has a place here, you need to lean mostly on a solid body of rules that provides explicit reasoning, which copes better with the so-called 'long tail' of possible situations, as you cannot put every conceivable situation in a data set.
This thus again turns out to be a place where human input and intelligence are required: while an RNN or similar can be trained on an impressive data set, it will never be able to learn the reasons why a decision was made in a training video, nor provide its own reasoning and make reasonable adaptations when faced with a new situation. This is where human experts have to define explicit rules, taking into account the known facts about the current surroundings and the state of the vehicle.
This is where details like the explicit distance to an obstacle, its relative speed and dimensions, and the room available to swerve around it are not just nice to have. Adding sensors like radar and Lidar provides solid data that an RGB camera plus CNN may also provide if you're lucky, but then again maybe not. When you're talking about highway speeds and potentially the lives of multiple people at risk, certainty always wins out.
Tesla Hardware And Sneaky Radars
Arbe Phoenix radar module installed in a Tesla car as part of the Hardware 4 Autopilot hardware. (Credit: @greentheonly, Twitter)
One of the poorly kept secrets about Tesla's Autopilot system is that it has had a front-facing radar sensor for most of its existence. Starting with Hardware 1 (HW1), it featured a single front-facing camera behind the top of the windshield and a radar behind the lower grille, in addition to 12 ultrasonic sensors around the vehicle.
Notably, Tesla did not initially use the radar in a primary object detection role here, meaning that object detection and emergency stop functionality were performed using the RGB cameras. This changed after the RGB camera system failed to notice a white trailer against a bright sky, resulting in a spectacular crash. The subsequent firmware update gave the radar system the same role as the camera system, which likely would have prevented that particular crash.
HW1 used Mobileye's EyeQ3, but after Mobileye cut ties with Tesla, Nvidia's Drive PX 2 was used instead for HW2. This upped the number of cameras to eight, providing a surround view of the car's surroundings, with a similar forward-facing radar. After an intermediate HW2.5 revision, HW3 was the first to use a custom processor, featuring twelve Arm Cortex-A72 cores clocked at 2.6 GHz.
HW3 initially also had a radar sensor, but in 2021 this was eliminated with the ‘Tesla Vision’ system, which resulted in a significant uptick in crashes. In 2022 it was announced that the ultrasonic sensors for short-range object detection would be removed as well.
Then in January of 2023 HW4 started shipping, with even more impressive computing specs and 5 MP cameras instead of the previous 1.2 MP ones. This revision also reintroduced the forward-facing radar, apparently the Arbe Phoenix radar with a 300 meter range, but not in the Model Y. This indicates that RGB camera-only perception is still the primary mode for Tesla cars.
Answering The Question
At this point we can say with a high degree of certainty that by just using RGB cameras it is exceedingly hard to reliably stop a vehicle from smashing into objects, for the simple reason that you are reducing the amount of reliable data that goes into your decision-making software. While the object-detecting CNN may give a 29% probability of an object being right up ahead, the radar or Lidar will have told you that a big, rather solid-looking object is lying on the road. Your own eyes would have told you that it's a large piece of concrete that fell off a truck in front of you.
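As a rough illustration of why the extra sensor matters for the decision-making step, here is a minimal Python sketch of a fused emergency-braking check; the thresholds and field names are invented for illustration and do not represent any manufacturer's actual firmware logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraDetection:
    """Output of a hypothetical vision CNN for the area straight ahead."""
    object_probability: float   # 0.0 .. 1.0

@dataclass
class RangeReturn:
    """A hypothetical radar/Lidar return for the same area."""
    distance_m: float
    closing_speed_mps: float    # positive = we are approaching the object

def should_emergency_brake(cam: CameraDetection,
                           ranging: Optional[RangeReturn]) -> bool:
    """Fuse camera confidence with an independent range measurement.

    A camera-only system has to pick a single confidence cutoff; with a
    range sensor, even a low-confidence camera detection becomes actionable
    once something solid is measurably close and closing fast.
    """
    # Camera alone: only brake on a very confident detection.
    if cam.object_probability > 0.9:
        return True

    # With radar/Lidar: a physical return within a short time-to-impact
    # overrides a hesitant camera score (e.g. the 29% case in the text).
    if ranging is not None:
        time_to_impact = ranging.distance_m / max(ranging.closing_speed_mps, 0.1)
        if time_to_impact < 2.0 and cam.object_probability > 0.2:
            return True
    return False

# The concrete-block-on-the-highway scenario from the article:
print(should_emergency_brake(CameraDetection(0.29),
                             RangeReturn(distance_m=40.0, closing_speed_mps=30.0)))
```

With only the camera branch, the 29% detection above would be ignored; the independent range return is what turns it into a braking decision.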
This then mostly leaves the question of whether the front-facing radar that's present in at least some Tesla cars is about as good as the Lidar contraption that's used by other car manufacturers like Volvo, as well as the roof-mounted version used by Waymo. After all, both work according to roughly the same basic principles.
That said, Lidar is superior when it comes to aspects like accuracy, as radar uses longer wavelengths. At the same time a radar system isn't bothered as much by weather conditions, while generally being cheaper. For Waymo the choice of Lidar over radar comes down to this improved detail: its cars can create a detailed 3D image of the surroundings, down to the direction a pedestrian is facing and the hand signals given by cyclists.
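That "detailed 3D image" is essentially a point cloud: each laser return is a range measured along a known beam direction, which simple trigonometry converts into an (x, y, z) point. A minimal sketch, with invented numbers rather than real sensor output:

```python
import math
from typing import List, Tuple

def lidar_return_to_point(range_m: float,
                          azimuth_deg: float,
                          elevation_deg: float) -> Tuple[float, float, float]:
    """Convert one Lidar return (a range along a known beam direction)
    into Cartesian coordinates relative to the sensor."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)   # forward
    y = range_m * math.cos(el) * math.sin(az)   # left
    z = range_m * math.sin(el)                  # up
    return (x, y, z)

# A handful of invented returns; a real spinning unit produces
# hundreds of thousands of such points per second.
returns = [(12.4, -1.0, 0.5), (12.5, -0.5, 0.5), (12.5, 0.0, 0.5)]
cloud: List[Tuple[float, float, float]] = [
    lidar_return_to_point(r, az, el) for r, az, el in returns
]
for point in cloud:
    print(tuple(round(c, 2) for c in point))
```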
Thus the shortest possible answer is that yes, Lidar is absolutely the best option, while radar is a pretty good option to at least not drive into that semitrailer and/or pedestrian. Assuming your firmware is properly configured to act on said object detection, natch.
100 Infostealer Packages Uploaded to npm by Exploiting AI Hallucinations
Since August 2024, the PhantomRaven campaign has uploaded 126 malicious packages to npm, which have been downloaded more than 86,000 times in total. The campaign was discovered by Koi Security, which reported that the attacks were enabled by a little-known npm feature that lets them evade protection and detection.
It is worth noting that around 80 malicious packages were still active at the time the report was published. The experts explain that the attackers abuse the Remote Dynamic Dependencies (RDD) mechanism.
Normally, a developer sees all of a package's dependencies at install time, downloaded from the trusted npm infrastructure. RDD, however, allows packages to automatically pull code from external URLs, even over an unencrypted HTTP channel. Meanwhile, the package manifest shows no dependencies at all.
When a developer runs npm install, the malicious package silently downloads a payload from an attacker-controlled server and executes it immediately. No user interaction is required, and static analysis tools remain unaware of the activity.
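As a purely defensive illustration of what to look for, here is a minimal Python sketch that walks a project's node_modules and flags manifests whose dependency specifiers or install-time scripts reference plain http:// URLs; the heuristic is deliberately simple and is not the detection method Koi Security describes.

```python
import json
import os

SUSPICIOUS_PREFIX = "http://"   # unencrypted remote code is the red flag here

def audit_node_modules(root: str = "node_modules") -> None:
    """Walk installed packages and flag manifests that pull dependencies
    or run install-time scripts pointing at plain-HTTP URLs."""
    for dirpath, _dirnames, filenames in os.walk(root):
        if "package.json" not in filenames:
            continue
        manifest_path = os.path.join(dirpath, "package.json")
        try:
            with open(manifest_path, encoding="utf-8") as fh:
                manifest = json.load(fh)
        except (OSError, json.JSONDecodeError):
            continue

        findings = []
        for section in ("dependencies", "devDependencies", "optionalDependencies"):
            for name, spec in (manifest.get(section) or {}).items():
                if isinstance(spec, str) and spec.startswith(SUSPICIOUS_PREFIX):
                    findings.append(f"{section}: {name} -> {spec}")
        for hook in ("preinstall", "install", "postinstall"):
            script = (manifest.get("scripts") or {}).get(hook, "")
            if SUSPICIOUS_PREFIX in script:
                findings.append(f"scripts.{hook}: {script}")

        if findings:
            print(f"[!] {manifest_path}")
            for finding in findings:
                print(f"    {finding}")

if __name__ == "__main__":
    audit_node_modules()
```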
“PhantomRaven demonstrates how sophisticated attackers can be when they exploit the blind spots of traditional security solutions. Remote dynamic dependencies are simply invisible to static analysis,” the researchers say.
Note that the malware is downloaded from the server every time the package is installed, rather than being cached.
This opens the door to targeted attacks: the attackers can check the IP address of the request and serve harmless code to security researchers, deliver malicious code to corporate networks, and deploy specialized payloads for cloud environments.
Once a machine is infected, the malware carefully gathers information about the victim's system:
- environment variables containing configurations of the developer's internal systems;
- tokens and credentials for npm, GitHub Actions, GitLab, Jenkins and CircleCI;
- the entire CI/CD environment through which code changes from different developers pass.
The stolen tokens can be used to attack supply chains and inject malicious code into legitimate projects. Data exfiltration is organized redundantly, using three methods: HTTP GET with data in the URL, HTTP POST with JSON, and WebSocket connections.
The experts write that many of the malicious packages masquerade as GitLab and Apache tools.
Slopsquatting, i.e. the exploitation of AI hallucinations, plays a special role in this campaign. Developers often ask LLM assistants which packages are best suited for a particular project, and AI models frequently invent non-existent but plausible names. The PhantomRaven operators track these hallucinations and register packages under those names. The victims end up installing the malware themselves by following the LLM's recommendations.
LLM developers still do not understand the exact causes of these hallucinations and cannot build models that prevent them, and this is exactly what the attackers are exploiting. The researchers remind developers not to rely on LLMs when choosing dependencies, to carefully check package names and their sources, and to install only packages from trusted vendors.
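One cheap sanity check against slopsquatting is to look a suggested name up in the public npm registry before installing it, and to eyeball its age and version history instead of trusting the LLM. A minimal sketch using the registry's standard metadata endpoint (the package names below are just examples):

```python
import json
import urllib.request
from urllib.error import HTTPError

REGISTRY = "https://registry.npmjs.org"

def check_package(name: str) -> None:
    """Fetch a package's registry metadata before trusting an LLM suggestion."""
    url = f"{REGISTRY}/{name}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            meta = json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            print(f"[!] '{name}' does not exist on npm - possible hallucination")
            return
        raise

    created = meta.get("time", {}).get("created", "unknown")
    versions = len(meta.get("versions", {}))
    print(f"'{name}': first published {created}, {versions} version(s) on record")
    # A brand-new package with a single version deserves extra scrutiny
    # before it goes anywhere near `npm install`.

if __name__ == "__main__":
    for suggestion in ["left-pad", "definitely-not-a-real-pkg-xyz123"]:
        check_package(suggestion)
```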
The article 100 Infostealer Packages Uploaded to npm by Exploiting AI Hallucinations appeared first on Red Hot Cyber.
Atroposia: the MaaS Platform That Provides a Trojan Equipped with a Vulnerability Scanner
Researchers at Varonis have discovered the Atroposia malware-as-a-service (MaaS) platform. For $200 a month, its customers receive a remote access Trojan with extensive functionality, including remote desktop, file system management, theft of information, credentials, clipboard contents and cryptocurrency wallets, DNS hijacking, and a built-in scanner for local vulnerabilities.
According to the analysts, Atroposia has a modular architecture. The malware communicates with its command-and-control servers over encrypted channels and is able to bypass User Account Control (UAC) to escalate privileges on Windows.
Once a machine is infected, it provides persistent, hard-to-detect access to the victim's system. Atroposia's key modules are:
HRDP Connect starts a hidden remote desktop session in the background, allowing attackers to open applications, read documents and e-mails and, in general, interact with the system without any visible sign of malicious activity. The researchers point out that standard remote-access monitoring tools may fail to detect this activity.
The file manager works like the familiar Windows File Explorer: attackers can view, copy, delete and execute files. The grabber component searches for data by extension or keyword, compresses it into password-protected ZIP archives and sends it to the command-and-control server using in-memory techniques, minimizing the traces the attack leaves on the system.
The stealer collects saved login data, cryptocurrency wallet data and chat files. The clipboard manager intercepts everything the user copies (passwords, API keys, wallet addresses) in real time and stores it for the attackers.
The DNS spoofing module maps domains to attacker-controlled IP addresses at the host level, silently redirecting victims to servers controlled by the hackers. This opens the door to phishing, MitM attacks, fake updates, injection of adware or malware, and data theft via DNS queries.
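Since this particular hijack lives in the local hosts file, a quick way to spot it is to scan that file for well-known domains pinned to fixed IP addresses. A minimal Python sketch, with an arbitrary example watchlist:

```python
import platform
from pathlib import Path

# Domains you would not normally expect to see pinned to an IP in the hosts file.
WATCHED_DOMAINS = {"github.com", "gitlab.com", "login.microsoftonline.com"}

def hosts_file_path() -> Path:
    if platform.system() == "Windows":
        return Path(r"C:\Windows\System32\drivers\etc\hosts")
    return Path("/etc/hosts")

def find_suspicious_overrides() -> None:
    """Flag hosts-file entries that pin well-known domains to fixed IPs,
    the same host-level trick a DNS-spoofing module relies on."""
    for line in hosts_file_path().read_text(encoding="utf-8").splitlines():
        entry = line.split("#", 1)[0].strip()   # drop comments
        if not entry:
            continue
        parts = entry.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            if name.lower() in WATCHED_DOMAINS:
                print(f"[!] {name} is pinned to {ip} in the hosts file")

if __name__ == "__main__":
    find_suspicious_overrides()
```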
The built-in vulnerability scanner analyzes the victim's system for unpatched vulnerabilities, insecure settings and outdated software. The results are sent to the malware operators as a score, which the attackers can use to plan further attacks.
The researchers warn that this module is particularly dangerous in corporate environments: the malware might detect an outdated VPN client or a privilege-escalation vulnerability, which can then be exploited to gain deeper insight into the victim's infrastructure. The scanner also probes nearby vulnerable systems for potential lateral movement.
Varonis notes that Atroposia continues the trend toward the democratization of cybercrime.
Together with other MaaS platforms (such as SpamGPT and MatrixPDF), it lowers the technical barrier to entry, allowing even low-skilled attackers to carry out effective "subscription-based" attacks.
The article Atroposia: the MaaS Platform That Provides a Trojan Equipped with a Vulnerability Scanner appeared first on Red Hot Cyber.
Jubilee of the world of education. Card. Tolentino de Mendonça: "Education is the new name for peace. We need a new pact for the future" - AgenSIR
Sowing the future. "The Catholic school … sows the future": with this quote from Pope Leo XIV, Card.… Giovanna Pasqualin Traversa (AgenSIR)
Flexibility!
Thursday 6 November (and every Thursday), from 19:00 to 20:00, at Villa Occupata, Via Litta Modignani 66
Flexibility course! Stretching and dynamic lengthening for every level of creakiness!
Bring a mat if you have one... and of course wear comfortable clothes!
Parole d'evasione: Prison writings
Monday 3 November, from 18:30 to 19:30, at COX18, Via Conchetta 18, Milan; and also online, at inventati.org/apm/agenda.php?s…
Parole d'evasione
Starting from Marco Nocente's book "Non è più il carcere di una volta. Lo spazio detentivo nelle lettere al collettivo OLGa" (Meltemi, 2025), three meetings in which some issues that are crucial behind bars today will be explored together with the author.
• Monday 3 November 2025, 18:30-19:30
Prison writings
This meeting will discuss writings from prison, their different forms, and the difficulties and richness of the exchanges and testimonies between inside and outside.
Are you old, or do you know who Sombr is?
The music phenomenon known as Sombr, explained gently for old millennials. Alex Abad-Santos (Vox)
2022 trip to #MountDesertRock, part 3.
The view from the top of the lighthouse tower.
{📷: Sony a7iii}
•
#DigitalPhotography #photography #sony #SonyAlpha #SonyA7iii #tamron #maine #MaineCoast #islands #MaineIslands #GulfOfMaine #ocean #AtlanticOcean #offshore #lighthouse #MaineLighthouse #summer
PlayStation: PS4 and PS5 games up to 90% off on the PS Store
https://www.tecmundo.com.br/voxel/502704-playstation-jogos-para-ps4-e-ps5-com-ate-90-off-na-ps-store.htm?utm_source=flipboard&utm_medium=activitypub
Posted into TecMundo @tecmundo-TecMundo
Cagliari-Sassuolo player ratings: Pinamonti surgical (6.5), Folorunsho fails to make an impact (5.5)
https://www.gazzetta.it/Calcio/Serie-A/Cagliari/30-10-2025/cagliari-sassuolo-1-2-le-pagelle_preview.shtml?reason=unauthenticated&utm_source=flipboard&utm_medium=activitypub
Posted into Calcio @calcio-Gazzetta
Cagliari-Sassuolo player ratings: Pinamonti surgical (6.5), Folorunsho fails to make an impact (5.5)
Grosso's entire lineup gets a passing grade, and the defense does well too. Zappa marshals the back line and even hits the crossbar. Matteo Brega (La Gazzetta dello Sport)
🌎 "Which countries employ the most people in the public sector in Europe?"
🔴 reddit.com/r/jaimelescartes/co…
🔴 fr.statista.com/infographie/30…
🔴 alternatives-economiques.fr/la…
🔴 franceinfo.fr/monde/europe/ele…
REPORTAGE. "We kept raising the alarm, but politicians didn't listen": in Germany, the dilapidation of
Titanic construction projects await the next chancellor, with elections taking place on Sunday. A bridge between Dortmund and Frankfurt has become the symbol of the lack of investment in infrastructure. Sébastien Baer (Franceinfo)
#humor #decorating #politics
Respond to every TERF dog whistle gotcha with "What are you, a cop?"
"What is a woman?"
"What are you, a cop?"
"Should men be allowed in women's bathrooms?"
"What are you, a cop?"
"Do you have a penis or a vagina?"
"What are you, a cop?"
And if they say "Yes, I'm a cop."
Then the clear response is "🖕 Fuck the police." And walk out.
"What's in your pants?"
I'd like to read your privacy policy so I can better understand how this information will be processed.
How the physics of baseball could help Kevin Gausman and the Blue Jays win the World Series
https://theconversation.com/how-the-physics-of-baseball-could-help-kevin-gausman-and-the-blue-jays-win-the-world-series-268732?utm_source=flipboard&utm_medium=activitypub
Posted into Science & Tech @science-tech-ConversationCA
How the physics of baseball could help Kevin Gausman and the Blue Jays win the World Series
With the Toronto Blue Jays on the cusp of a World Series title, pitcher Kevin Gausman’s mastery of the splitter is not just athletic skill, it’s a brilliant application of physics. The Conversation
Between Washington and Beijing, a truce that might hold
https://tg24.sky.it/mondo/video/2025/10/30/tra-washington-e-pechino-una-tregua-che-potrebbe-reggere-1047671?utm_source=flipboard&utm_medium=activitypub
Posted into Mondo @mondo-SkyTG24
Between Washington and Beijing, a truce that might hold
Read the article "Between Washington and Beijing, a truce that might hold" on Sky TG24. Redazione Sky TG24 (Sky TG24)
Is dealing with the Dept. of Work & Pensions fatal?
Well, if you are a parent using the DWP's Child Maintenance Service, you look to have greater risk of death than those who are not using the service....
We know how difficult dealing with the toxic & callous DWP can be, but it seems that for 'paying parents' using the CMS, the stress & tensions of using the DWP to ensure maintenance is paid may lead to more premature deaths.
disabilitynewsservice.com/shoc…
‘Shocking’ figures show parents linked to DWP service face death rates up to three times higher
Parents who pay to support a child through the Department for Work and Pensions (DWP) and its Child Maintenance Service (CMS) face death rates up to three times higher than others the same age, acc… Disability News Service
You should be ashamed to be buying into this kind of bullshit.
CORRELATION IS NOT FUCKING CAUSATION.
#AIPAC HACK #RitchieTorres Gets a NEW Challenger (w/ #AndreEaston)
#BriahnaJoyGray #BadFaith #Politics #Israel
youtube.com/watch?v=bIW2vmLUpV…