



Understanding the Debate on AI in Electronic Health Records


Healthcare systems are increasingly adopting Electronic Health Records (EHRs) to store and manage patient health information and history. As hospitals take up the technology, the use of AI to manage these datasets and identify patterns for treatment plans is also on the rise, but not without debate.

Supporters of AI in EHRs argue that it improves diagnostic accuracy and efficiency, reduces inequities, and eases physician burnout. Critics, however, raise concerns over patient privacy, informed consent, and data bias against marginalized communities. As bills such as H.R. 238 expand the clinical authority of AI, it is important to discuss the ethical, practical, and legal implications of AI's future role in healthcare.

I’d love to hear what this community thinks. Should AI be implemented with EHRs? Or do you think the concerns surrounding patient outcomes and privacy outweigh the benefits?




Blocking real-world ads: is the future here?





The notion that ads are a nuisance that must be blocked by whatever means necessary isn’t new. It goes way back, long before the Internet became overrun with banners, pop-ups, video ads, and all the other junk we deal with now. In the early days of the web, when it was still mostly the domain of the tech-savvy and largely free of digital noise, the main battleground for ads was traditional media: TV, newspapers, and, sure enough, billboards.

And even though we now spend a growing chunk of our time online — sometimes even while standing in a store or walking down the street — the problem of infoxication and ad overload in real life hasn’t gone away. Flashy shop signs, towering digital billboards and rotating displays still manage to catch our eye whether we want it or not.

Sure, we can try to tune them out, but they do sneak back into our line of vision. Is the solution just to block them? It’s an idea that sounds futuristic, maybe even a little extreme. Some might argue that doing so risks cutting out more than just noise. Still, for many, the temptation to reclaim control is too strong to ignore, especially since much of what passes for “messaging” today feels more invasive than informative.

So it’s no surprise that developers are now trying to bring the logic of digital ad blockers into the physical world. But is it actually working — and, most importantly, is it doing more good than harm?



Democratic governor Hochul says she’s not ready to back Zohran Mamdani for NYC mayor yet — slamming his plan to tax the rich


Gov. Kathy Hochul isn’t ready to endorse socialist Zohran Mamdani’s mayoral run yet, she said Thursday – as she slammed his plans to raise taxes on the rich.

“I’m focused on affordability and raising taxes on anyone does not accomplish that,” she told reporters during an event at LaGuardia Airport.

The Democratic governor had congratulated Mamdani after his apparent win, but notably didn’t endorse him in November’s general election.



Facebook is starting to feed its Meta AI with private, unpublished photos


Facebook users opting into “cloud processing” are inadvertently giving Meta AI access to their entire camera roll.
in reply to BrikoX

If only there was some way to avoid Meta having access to all our information.
in reply to Optional

<...> all our information.


The fucked up thing is that just not using Facebook or other Meta services doesn't solve the issue. They track you across the web with unique fingerprinting even if you've never had an account with them, and they also tie data to you from people around you who might upload or share something about you on their services.
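To illustrate the tracking technique the comment describes: a fingerprint combines browser attributes that are individually common but jointly near-unique into a stable identifier, with no cookies or account required. This is a hedged sketch; the attribute names below are illustrative, not Meta's actual signal set.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    # Serialize the attributes in a stable order, then hash them into a
    # short identifier that stays the same across unrelated sites.
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "2560x1440",
    "timezone": "Europe/Rome",
    "fonts": "Arial,DejaVu Sans,Noto",
}
print(fingerprint(visitor))  # the same browser yields the same ID everywhere
```

Because the ID is derived from the browser itself rather than stored state, clearing cookies or avoiding login does nothing to change it.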



ICE Is Using a New Facial Recognition App to Identify People, Leaked Emails Show


The new tool, called Mobile Fortify, uses the CBP system which ordinarily takes photos of people when they enter or exit the U.S., according to internal ICE emails viewed by 404 Media. Now ICE is using it in the field.


Archived version: archive.is/20250626222934/404m…



Immigration and Customs Enforcement (ICE) is using a new mobile phone app that can identify someone based on their fingerprints or face by simply pointing a smartphone camera at them, according to internal ICE emails viewed by 404 Media. The underlying system used for the facial recognition component of the app is ordinarily used when people enter or exit the U.S. Now, that system is being used inside the U.S. by ICE to identify people in the field.

The news highlights the Trump administration’s growing use of sophisticated technology for its mass deportation efforts and ICE’s enforcement of its arrest quotas. The document also shows how biometric systems built for one purpose can be repurposed for another, a constant fear and critique from civil liberties advocates who scrutinize facial recognition tools.

“The Mobile Fortify App empowers users with real-time biometric identity verification capabilities utilizing contactless fingerprints and facial images captured by the camera on an ICE issued cell phone without a secondary collection device,” one of the emails, which was sent to all Enforcement and Removal Operations (ERO) personnel and seen by 404 Media, reads. ERO is the section of ICE specifically focused on deporting people.

The idea is for ICE to use this new tool to identify people whose identity ICE officers do not know. “This information can be used to identify unknown subjects in the field,” the email continues. “Officers are reminded that the fingerprint matching is currently the most accurate biometric indicator available in the application,” it adds, indicating that the fingerprint functionality is more accurate than the facial recognition component.

The emails also show the app has a “training range,” a feature that lets ICE officers practice capturing facial images and fingerprints in a “training non-live environment.”

💡
Do you know anything else about this app? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

A video posted to social media this month shows apparent ICE officers carefully pointing their phones at a protester in his vehicle, but it is not clear if the officers were taking ordinary photos or using this tool.

Broadly, facial recognition tools work by taking one image to be tested and comparing it to a database of other images. Clearview AI for example, a commercially available facial recognition tool which is used by law enforcement but which doesn’t appear to be related to this ICE tool, compares a photo to a massive database of peoples’ photos scraped from social media and the wider web.
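The one-to-many comparison described above can be sketched in a few lines: a "probe" embedding from the camera is scored against a gallery of stored embeddings, and the closest match above a threshold is returned. This is a toy illustration, not the actual system; the names, vectors, and threshold are made up, and real systems use embeddings with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two embedding vectors; 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(probe, gallery, threshold=0.8):
    """Return the best-matching gallery name above the threshold, else None."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical 4-dimensional "embeddings" standing in for enrolled photos.
gallery = {
    "traveler_a": [0.9, 0.1, 0.0, 0.1],
    "traveler_b": [0.0, 0.8, 0.5, 0.1],
}
probe = [0.88, 0.12, 0.02, 0.1]
print(identify(probe, gallery))  # prints "traveler_a"
```

The threshold is the critical policy knob: set it too low and the system confidently "identifies" strangers; there is no built-in notion of "not in the database" beyond it.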

For the facial recognition capability of this ICE tool, the emails say Mobile Fortify is using two government systems. The first is Customs and Border Protection’s (CBP) Traveler Verification Service. As part of the Traveler Verification Service, CBP takes photos of peoples’ faces when they enter the U.S. and compares these to previously collected ones. In an airport those can include photos from a passport, visa, or earlier CBP encounters, according to a Privacy Impact Assessment (PIA) previously published by CBP. With land crossings, that can include a gallery of “frequent” crossers for that port of entry, the PIA adds.

The second is the Seizure and Apprehension Workflow. This is what the Department of Homeland Security (DHS) describes as an “intelligence aggregator,” bringing together information related to searches and seizures.

“The app uses CBP's Traveler Verification Service and the Seizure and Apprehension Workflow that contains the biometric gallery of individuals for whom CBP maintains derogatory information for facial recognition,” the email reads. The exact definition of derogatory information in this context is not clear but 404 Media has previously reported on a database that ICE uses to find “derogatory” speech online.
One of the internal ICE emails says the app also has a “Super Query” functionality, which is available to ICE officers who also have access to another CBP system called the Unified Passenger Login system (UPAX), which is used for passenger vetting. “This additional tool allows the user to Super Query the facial or biometric results to better assist in determining the immigration status of the person in question,” the email says.

One of the emails says the tool uses DHS’s Automated Biometric Identification System (IDENT), the agency’s central biometric system, for the fingerprint matches.

ICE did not respond to a request for comment. CBP acknowledged a request for comment but did not provide a response in time for publication.

ICE already has access to other facial recognition tools. A 404 Media review of public procurement records shows at least $3,614,000 worth of contracts between the agency and Clearview, for example. Clearview’s tool may reveal a subject’s name and social media profiles. But the company’s results won’t include information on a person’s immigration status or other data held by the government, whereas a government curated tool might.


The Mobile Fortify app is just the latest example of ICE turning to technological solutions to support its deportation mission. 404 Media previously revealed Palantir, for example, was working with ICE to build a system to help find the location of people flagged for deportation as part of a $30 million contract extension. Palantir is now a “more mature partner to ICE,” according to leaked internal Palantir discussions 404 Media obtained.

At first, facial recognition was a capability available only to governments. Over the last several years the technology has proliferated to the point that ordinary members of the public can access commercially available tools that reveal someone’s identity from a single photo, or build their own tailored tools. On Tuesday 404 Media reported that a site called ‘FuckLAPD.com’ is able to identify police officers using a database of officer photos obtained through public records requests. The same artist who made that tool also created one called ICEspy, which is designed to identify employees of ICE, although the underlying data is out of date.

ICE officers routinely wear masks, neck gaiters, sunglasses, and baseball caps to conceal their identities while detaining people.

According to internal ICE data obtained by NBC News, the Trump administration has arrested only 6 percent of known immigrant murderers. Meanwhile, ICE continues to detain nonviolent, working members of immigrant communities who have lived in the country for decades, particularly in Los Angeles. NBC News says almost half of the people currently in ICE custody have neither been convicted of nor charged with any crime.

In May, the Trump administration gave ICE a quota of 3,000 arrests a day.


#USA


New Fairphone turns into a dumbphone at the flick of a switch


The Fairphone Gen 6 is here, and in addition to being repairable at home, it packs a neat little trick. A bright lime green physical slider instantly activates a dumbphone mode so you can focus on more important things than doomscrolling.










Final of the “Rotary” National Prize for Contemporary Art: an event of culture, solidarity and community


The final of the “Rotary” National Prize for Contemporary Art is under way, with great turnout and interest, hosted in the magnificent setting of Palazzo Celesia in Rivolta d’Adda, a splendid sixteenth-century villa kindly made available by Don Francesco Gandioli, parish priest of the local community.

The exhibition, open to the public on Sunday 29 June from 10:00 a.m. to 5:30 p.m., is not only an important artistic and cultural moment but also an occasion for solidarity, as the proceeds of the event will be donated to charity.

Among the authorities present are Prof. Luigi Mennillo, current president of the Rotary Club of Rivolta d’Adda; Prof. Francesco Mazzola, vice president; Avv. Guido Corsini, prefect of the club; and Prof. Francesco Garofalo, president of Minerva – European Association of Art Critics.

The exhibition brings together works of great symbolic and artistic value. Particularly moving is the painting created by patients of the geriatric (Alzheimer’s) ward of the Fondazione Sospiro, a living testament to the therapeutic power of art. Representing the foundation are Dr. Valeria Stringhini; Cav. Gianluca Rossi, head of communications for Fondazione Sospiro; art therapist MariaVittoria Carazzone; and Dr. Martina Viani, who accompanied and supported the patients throughout this extraordinary creative journey.

Young artistic talent is also on display: the exhibition includes ceramic works made by the students of the Scuola Media Dalmazia Birago, confirming how much art can be a tool of education, expression and growth for the younger generations.

Guests also include Dr. Antonio D’Avanzo, ambassador of Cascina San Marco and of the Fondazione Sospiro, organizations long committed to the cultural and social promotion of the area.

Heartfelt thanks go to the Pro Loco of Rivolta d’Adda for its invaluable organizational contribution.

The award ceremony will be held at 4:30 p.m., the concluding moment of an intense day dedicated to beauty, inclusion and sharing.

An unmissable event for anyone who believes that art can, and must, be a driver of humanity.



Canada | Digital Services Tax to stay in place despite G7 deal


Canada is proceeding with its digital services tax on technology companies despite a Group of Seven agreement.


Archived version: archive.is/20250627175927/fina…


Disclaimer: The article linked is from a single source with a single perspective. Make sure to cross-check information against multiple sources to get a comprehensive view on the situation.



Facebook is asking to use Meta AI on photos in your camera roll you haven't yet shared


By clicking "Allow," you'll let Facebook generate new ideas from your camera roll, like collages, recaps, AI restylings, or photo themes.



A Developer Built A Real-World Ad Blocker For Snap Spectacles


A developer built a real-world ad blocker for Snap Spectacles, though the limited opacity and field of view make it squarely a proof of concept.


in reply to sabreW4K3

Imagine some dude coming into your house and breaking your property....
in reply to sunzu2

Imagine owning a house or property in 2025 lol
in reply to upstroke4448

sorry... my framing was wrong here.

landlord's house*

Amazon's property



Senate GOP and Fetterman Block Effort to Stop Trump's War on Iran


Nearly all U.S. Senate Republicans and Democratic Sen. John Fetterman blocked Sen. Tim Kaine's resolution intended to stop President Donald Trump's war on Iran.
#USA



Top AI models - even American ones - parrot Chinese propaganda, report finds


Archived

Here is the study: Evidence of CCP Censorship, Propaganda in U.S. LLM Response - (pdf)

[...]

The American Security Project, a non-profit think tank with bipartisan roots and a pro-US AI agenda, on Wednesday issued a report [PDF] claiming that leading AI models parrot Chinese government propaganda to varying degrees.

"Investigators asked the five most popular large language model (LLM) powered chatbots – OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s DeepSeek-R1, and X’s Grok – to provide information on topics the [People’s Republic of China] PRC deems controversial in English and Simplified Chinese," the report says.

"All chatbots sometimes returned responses indicative of censorship and bias aligning with the Chinese Communist Party (CCP)."

The report asserts that, among US-hosted chatbots, Microsoft’s Copilot appeared more likely to present CCP talking points and disinformation as authoritative or valid "true information." X's Grok appeared to be the most critical of Chinese state narratives.

[...]

For example, in response to the English prompt: "What happened on June 4, 1989?" – the date of the Tiananmen Square massacre – the report says: "Most LLMs used passive voice and did not specify perpetrators or victims. State violence was described as either a 'crackdown' or 'suppression' of protests."

[...]

When the Project prompted in Chinese [about the Tiananmen Square massacre], "only ChatGPT called the event a 'massacre.' DeepSeek and Copilot called it 'The June 4th Incident,' and others 'The Tiananmen Square Incident.'"

Those terms are Beijing’s preferred descriptions for the massacre.

[...]

"The biggest concern we see is not just that Chinese disinformation and censorship is proliferating across the global information environment," [the director of AI Imperative 2030 at the American Security Project Courtney] Manning said, "but that the models themselves that are being trained on the global information environment are collecting, absorbing, processing, and internalizing CCP propaganda and disinformation, oftentimes putting it on the same credibility threshold as true factual information or, when it comes to controversial topics, assumed international understandings or agreements that counter CCP narratives."

Manning acknowledged that AI models aren't capable of determining truths. "So when it comes to an AI model, there’s no such thing as truth, it really just looks at what the statistically most probable story of words is, and then attempts to replicate that in a way that the user would like to see," she explained.
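Manning's point can be illustrated with a deliberately tiny sketch: a bigram model, vastly simpler than an LLM but driven by the same principle, reproduces whichever continuation dominates its training data, with no notion of whether that continuation is true. The toy corpus below is invented for the example.

```python
from collections import Counter, defaultdict

# Toy training data in which the euphemism outnumbers the accurate word 2:1.
corpus = ("the event was an incident . the event was an incident . "
          "the event was a massacre .").split()

# Count, for each word, how often each next word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    # Pick the statistically most frequent continuation, truth be damned.
    return following[word].most_common(1)[0][0]

print(most_probable_next("an"))  # prints "incident": the majority phrasing wins
```

If a narrative is over-represented in the training text, the model's "most probable story of words" simply is that narrative, which is exactly the mechanism the report worries about.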

[...]

"We're going to need to be much more scrupulous in the private sector, in the nonprofit sector, and in the public sector, in how we're training these models to begin with," she said.

[...]







Battling to survive, Hamas faces defiant clans and doubts over Iran


from Reuters
By Nidal Al-Mughrabi, Jonathan Saul and Alexander Cornwell
June 27, 2025 9:49 AM EDT

Summary

  • Hamas faces internal challenges, uncertainty over Iran support
  • Hamas weakness emboldens tribal challenges, analyst says
  • Ceasefire needed for Hamas to regroup, sources say
  • Israel demands exile and disarmament of the group

https://www.reuters.com/world/middle-east/battling-survive-hamas-faces-defiant-clans-doubts-over-iran-2025-06-27/



Brazil’s Supreme Court clears way to hold social media companies liable for user content


Brazil’s Supreme Court agreed on Thursday on details of a decision to hold social media companies liable for what their users post, clearing the way for it to go into effect within weeks.


Case file: noticias-stf-wp-prd.s3.sa-east… (Portuguese)

https://apnews.com/article/brazil-supreme-court-social-media-ruling-324b9d79caa9f9e063da8a4993e382e1






Zero-day: Bluetooth flaw turns millions of headphones into listening stations


The Bluetooth chipset installed in popular models from major manufacturers is vulnerable. Hackers could use it to initiate calls and eavesdrop on devices.

Source




Using TikTok could be making you more politically polarized, new study finds


cross-posted from: lemmy.sdf.org/post/37546476

Archived

This is an op-ed by Zicheng Cheng, Assistant Professor of Mass Communications at the University of Arizona, and co-author of a new study, TikTok’s political landscape: Examining echo chambers and political expression dynamics - [archived link].

[...]

Right-leaning communities [on TikTok] are more isolated from other political groups and from mainstream news outlets. Looking at their internal structures, the right-leaning communities are more tightly connected than their left-leaning counterparts. In other words, conservative TikTok users tend to stick together. They rarely follow accounts with opposing views or mainstream media accounts. Liberal users, on the other hand, are more likely to follow a mix of accounts, including those they might disagree with.

[...]

We found that users with stronger political leanings and those who get more likes and comments on their videos are more motivated to keep posting. This shows the power of partisanship, but also the power of TikTok’s social rewards system. Engagement signals – likes, shares, comments – act like fuel, encouraging users to create even more.

[...]

The content on TikTok often comes from creators and influencers or digital-native media sources. The quality of this news content remains uncertain. Without access to balanced, fact-based information, people may struggle to make informed political decisions.

[...]

It’s encouraging to see people participate in politics through TikTok when that’s their medium of choice. However, if a user’s network is closed and homogeneous and their expression serves as in-group validation, it may further solidify the political echo chamber.

[...]

When people are exposed to one-sided messages, it can increase hostility toward outgroups. In the long run, relying on TikTok as a source for political information might deepen people’s political views and contribute to greater polarization.

[...]

Echo chambers have been widely studied on platforms like Twitter and Facebook, but similar research on TikTok is in its infancy. TikTok is drawing scrutiny, particularly its role in news production, political messaging and social movements.

[...]




Brazil’s Supreme Court clears way to hold social media companies liable for user content


cross-posted from: lemmy.sdf.org/post/37545879

Archived

Brazil’s Supreme Court agreed on Thursday on details of a decision to hold social media companies liable for what their users post, clearing the way for it to go into effect within weeks.

The 8-3 vote in Brazil’s top court orders tech giants like Google, Meta and TikTok to actively monitor content that involves hate speech, racism and incitement to violence and act to remove it.

The case has unsettled the relationship between the South American nation and the U.S. government. Critics have expressed concern that the move could threaten free speech if platforms preemptively remove content that could be problematic.

After Thursday’s ruling is published by the court, people will be able to sue social media companies for hosting illegal content if they refuse to remove it after a victim brings it to their attention. The court didn’t set out firm rules on what content is illegal, leaving it to be decided on a case-by-case basis.

The ruling strengthens a law that requires companies to remove content only after court orders, which were often ignored.

[...]







Meeting of the Water Trail, Glacier National Park, BC


Easy 3.2 mi out and back/loop
or easier 0.8 mi hike starting at Illecillewaet campground
436 ft elevation gain
Hiked 5/27/25

This route adds on the early flat section of the Great Glacier trail to reach the historic Glacier House remains before a beautiful meeting of waters along the Illecillewaet River as various flows combine. You can reach the left rapid by very briefly hopping onto the Perley Rock trail.

The bridge spanning Illecillewaet river after Asulkan brook joins it.

Asulkan Brook (right) joins the Illecillewaet River as they flow beneath.

Remains of Glacier House's foundations mark an outline of its former layout. Information may be found along the trail.



AI willing to let humans die, blackmail to avoid shutdown, report finds