United Kingdom

Were Still More UK Postmasters Also Wrongly Prosecuted Over Accounting Bug? (computerweekly.com)

U.K. postmasters were mistakenly sent to prison due to a bug in their "Horizon" accounting software — as first reported by Computer Weekly back in 2009. Nearly 16 years later, the same site reports that now the Scottish Criminal Cases Review Commission "is attempting to contact any former subpostmasters that could have been prosecuted for unexplained losses on the Post Office's pre-Horizon Capture software."

"There are former subpostmasters that, like Horizon users, could have been convicted of crimes based on data from these systems..." Since the Post Office Horizon scandal hit the mainstream in January 2024 — revealing to a wide audience the suffering experienced by subpostmasters who were blamed for errors in the Horizon accounting system — users of Post Office software that predated Horizon have come forward... to tell their stories, which echoed those of victims of the Horizon scandal. The Criminal Cases Review Commission for England and Wales is now reviewing 21 cases of potential wrongful conviction... where the Capture IT system could be a factor...

The SCCRC is now calling on people who might have been convicted based on Capture accounts to come forward. "The commission encourages anyone who believes that their criminal conviction, or that of a relative, might have been affected by the Capture system to make contact with it," it said. The statutory body is also investigating a third Post Office system, known as Ecco+, which was also error-prone...

A total of 64 former subpostmasters in Scotland have now had their convictions overturned through legislation passed by the Scottish Parliament. So far, 97 convicted subpostmasters have come forward and 86 have been assessed, of which 64 have been overturned and 22 rejected; another 11 are still to be assessed. An independent group, fronted by a former Scottish subpostmaster, is also calling on users of any of the Post Office systems to come forward to tell their stories, and for support in seeking justice and redress.

ISS

Starliner's Space Station Flight Was 'Wilder' Than We Thought (arstechnica.com)

The Starliner spacecraft lost four thrusters while approaching the International Space Station last summer. NASA astronaut Butch Wilmore took manual control, remembers Ars Technica, "But as Starliner's thrusters failed, Wilmore lost the ability to move the spacecraft in the direction he wanted to go..." Starliner had flown to within a stone's throw of the space station, a safe harbor, if only they could reach it. But already, the failure of so many thrusters violated the mission's flight rules. In such an instance, they were supposed to turn around and come back to Earth. Approaching the station was deemed too risky for Wilmore and Williams, aboard Starliner, as well as for the astronauts on the $100 billion space station.

But what if it was not safe to come home, either?

"I don't know that we can come back to Earth at that point," Wilmore said in an interview. "I don't know if we can. And matter of fact, I'm thinking we probably can't."

After a half-hour exclusive interview, Ars Technica's senior space editor Eric Berger says he'd heard "a hell of a story." After Starliner lost four of its 28 reaction control system thrusters, Van Cise and his team in Houston decided the best chance for success was resetting the failed thrusters. This is, effectively, a fancy way of turning off your computer and rebooting it to try to fix the problem. But it meant Wilmore had to go hands-off from Starliner's controls. Imagine that. You're drifting away from the space station, trying to maintain your position. The station is your only real lifeline because if you lose the ability to dock, the chance of coming back in one piece is quite low. And now you're being told to take your hands off the controls...

Two of the four thrusters came back online.

Wilmore: "...But then we lose a fifth jet. What if we'd have lost that fifth jet while those other four were still down? I have no idea what would've happened. I attribute to the providence of the Lord getting those two jets back before that fifth one failed...

Berger: Mission Control decided that it wanted to try to recover the failed thrusters again. After Wilmore took his hands off the controls, this process recovered all but one of them. At that point, the vehicle could be flown autonomously, as it was intended to be.

"Wilmore added that he felt pretty confident, in the aftermath of docking to the space station, that Starliner probably would not be their ride home," according to the article. And Williams says it was the right decision. Publicly, NASA and Boeing expressed confidence in Starliner's safe return with crew. But Williams and Wilmore, who had just made that harrowing ride, felt differently.
AI

Microsoft's New AI-Generated Version of 'Quake 2' Now Playable Online (microsoft.com)

Microsoft has created a real-time AI-generated rendition of Quake II gameplay (playable on the web).

On Friday, Xbox's general manager of gaming AI posted the startling link to "an AI-generated gaming experience" at Copilot.Microsoft.com: "Move, shoot, explore — and every frame is created on the fly by an AI world model, responding to player inputs in real-time. Try it here."

They started with their "Muse" videogame world models, adding "a real-time playable extension" that players can interact with through keyboard/controller actions, "essentially allowing you to play inside the model," according to a Microsoft blog post. A concerted effort by the team resulted in both planning out what data to collect (what game, how should the testers play said game, what kind of behaviours might we need to train a world model, etc), and the actual collection, preparation, and cleaning of the data required for model training. Much to our initial delight we were able to play inside the world that the model was simulating. We could wander around, move the camera, jump, crouch, shoot, and even blow up barrels similar to the original game. Additionally, since it features in our data, we can also discover some of the secrets hidden in this level of Quake II. We can also insert images into the model's context and have those modifications persist in the scene...

We do not intend for this to fully replicate the actual experience of playing the original Quake II game. This is intended to be a research exploration of what we are able to build using current ML approaches. Think of this as playing the model as opposed to playing the game... The interactions with enemy characters are a big area for improvement in our current WHAMM model. Often, they will appear fuzzy in the images and combat with them (damage being dealt to both the enemy/player) can be incorrect.

They warn that the model "can and will forget about objects that go out of view" for longer than 0.9 seconds. "This can also be a source of fun, whereby you can defeat or spawn enemies by looking at the floor for a second and then looking back up. Or it can let you teleport around the map by looking up at the sky and then back down. These are some examples of playing the model."

This generative AI model was trained on Quake II "with just over a week of data," reports Tom's Hardware — a dramatic reduction from the seven years of gameplay data used to train the original Muse model launched in February.

Some context from The Verge: "You could imagine a world where from gameplay data and video that a model could learn old games and really make them portable to any platform where these models could run," said Microsoft Gaming CEO Phil Spencer in February. "We've talked about game preservation as an activity for us, and these models and their ability to learn completely how a game plays without the necessity of the original engine running on the original hardware opens up a ton of opportunity."
"Is porting a game like Gameday 98 more feasible through AI or a small team?" asks the blog Windows Central. "What costs less or even takes less time? These are questions we'll be asking and answering over the coming decade as AI continues to grow. We're in year two of the AI boom; I'm terrified of what we'll see in year 10."

"It's clear that Microsoft is now training Muse on more games than just Bleeding Edge," notes The Verge, "and it's likely we'll see more short interactive AI game experiences in Copilot Labs soon." Microsoft is also working on turning Copilot into a coach for games, allowing the AI assistant to see what you're playing and help with tips and guides. Part of that experience will be available to Windows Insiders through Copilot Vision soon.
Businesses

Makers of Rent-Setting Software Sue California City Over Ban (apnews.com)

Berkeley, California is "the latest city to try to block landlords from using algorithms when deciding rents," reports the Associated Press (noting that officials in many cities claim the practice is driving up the price of housing).

But then real estate software company RealPage filed a federal lawsuit against Berkeley on Wednesday: Texas-based RealPage said Berkeley's ordinance, which goes into effect this month, violates the company's free speech rights and is the result of an "intentional campaign of misinformation and often-repeated false claims" about its products.

The U.S. Department of Justice sued RealPage in August under former President Joe Biden, saying its algorithm combines confidential information from each real estate management company in ways that enable landlords to align prices and avoid competition that would otherwise push down rents. That amounts to cartel-like illegal price collusion, prosecutors said. RealPage's clients include huge landlords who collectively oversee millions of units across the U.S. In the lawsuit, the Department of Justice pointed to RealPage executives' own words about how their product maximizes prices for landlords. One executive said, "There is greater good in everybody succeeding versus essentially trying to compete against one another in a way that actually keeps the entire industry down."

San Francisco, Philadelphia and Minneapolis have since passed ordinances restricting landlords from using rental algorithms. The Department of Justice case remains ongoing, as do lawsuits against RealPage brought by tenants and the attorneys general of Arizona and Washington, D.C...

[On a conference call, RealPage attorney Stephen Weissman told reporters] RealPage officials were never given an opportunity to present their arguments to the Berkeley City Council before the ordinance was passed and said the company is considering legal action against other cities that have passed similar policies, including San Francisco.

RealPage blames high rents not on the software it makes, but on a lack of housing supply...
Open Source

'Landrun': Lightweight Linux Sandboxing With Landlock, No Root Required (github.com)

Over on Reddit's "selfhosted" subreddit for alternatives to popular services, long-time Slashdot reader Zoup described a pain point:

- Landlock is a Linux Security Module (LSM) that lets unprivileged processes restrict themselves.

- It's been in the kernel since 5.13, but the API is awkward to use directly.

- It always annoyed the hell out of me to run random binaries from the internet without any real control over what they can access.


So they've rolled their own solution, according to Thursday's submission to Slashdot: I just released Landrun, a Go-based CLI tool that wraps Linux Landlock (5.13+) to sandbox any process without root, containers, or seccomp. Think firejail, but minimal and kernel-native. Supports fine-grained file access (ro/rw/exec) and TCP port restrictions (6.7+). No daemons, no YAML, just flags.

Example (where --rox allows read-only access with execution to specified path):

$ landrun --rox /usr touch /tmp/file
touch: cannot touch '/tmp/file': Permission denied
$ landrun --rox /usr --rw /tmp touch /tmp/file
$

It's MIT-licensed, easy to audit, and now supports systemd services.
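The "awkward to use directly" complaint is concrete: even checking whether Landlock is available means issuing a raw syscall. Here's a minimal, illustrative Python sketch (not part of landrun, which is written in Go) that probes the kernel's Landlock ABI version via ctypes; the syscall number and flag come from the kernel's Landlock UAPI, and a real sandbox would go on to call landlock_add_rule() and landlock_restrict_self():

```python
import ctypes

# Syscall number for landlock_create_ruleset; it is the same on all
# architectures because it was added after the syscall tables were unified.
SYS_LANDLOCK_CREATE_RULESET = 444
LANDLOCK_CREATE_RULESET_VERSION = 1 << 0  # probe-only flag, no ruleset made

def landlock_abi_version():
    """Return the kernel's Landlock ABI version, or None if unsupported.

    ABI v1 shipped in Linux 5.13 (file rules); v4 in 6.7 added the TCP
    port restrictions that landrun exposes.
    """
    libc = ctypes.CDLL(None, use_errno=True)
    version = libc.syscall(SYS_LANDLOCK_CREATE_RULESET, None, 0,
                           LANDLOCK_CREATE_RULESET_VERSION)
    return None if version < 0 else version
```

On a pre-5.13 kernel, or one built without the Landlock LSM, the probe returns None — which is exactly the kind of version juggling landrun hides behind a couple of flags.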

Books

Ian Fleming Published the James Bond Novel 'Moonraker' 70 Years Ago Today (cbr.com)

"The third James Bond novel was published on this day in 1955," writes long-time Slashdot reader sandbagger. Film buff Christian Petrozza shares some history: In 1979, the market was hot amid the studios to make the next big space opera. Star Wars blew up the box office in 1977, with Alien soon following, and while audiences eagerly awaited the next installment of George Lucas' The Empire Strikes Back, Hollywood was buzzing with spacesuits, lasers, and ships that cruised the stars. Politically, the Cold War between the United States and Russia was still a hot topic, with the James Bond franchise fanning the flames in the media entertainment sector. Moon missions had just finished their run in the early 70s and the space race was still generationally fresh. With all this in mind, as well as the successful run of Roger Moore's fun and campy Bond, the time seemed ripe to boldly take the globe-trotting Bond where no spy has gone before.

Thus, 1979's Moonraker blasted off to theatres full of chrome space-suits, laser guns, and jetpacks, as the franchise went full-bore science fiction to keep up with the Joneses of Hollywood's hottest genre. The film was a commercial smash hit, grossing $210 million worldwide. Despite some mixed reviews from critics, audiences seemed jazzed about seeing James Bond in space.

When it comes to adaptations of the novel that Ian Fleming wrote of the same name, Moonraker couldn't be farther from its source material, and may as well be renamed completely to avoid any association... Ian Fleming's original Moonraker was more of a post-war commentary on the domestic fears of modern weapons being turned on Europe by enemy scientists hired on by newer foes. With Nazi scientists being hired by both the U.S. and Russia to build weapons of mass destruction after World War II, this was less science fiction and much more a cautionary tale.

They argue for filming a new version of Moonraker "to find a happy medium between the glamor and the grit of the James Bond franchise..."
ISS

NASA Seeks Proposals for Two More Private Astronaut Space Station Visits (spacenews.com)

This week NASA "issued a solicitation for the next two private astronaut missions to the International Space Station," reports Space News. Scheduled after May of 2026 and then mid-2027: "These will be the fifth and sixth such missions to the ISS, part of a broader low Earth orbit commercialization effort by NASA with the ultimate goal of replacing the International Space Station with one or more commercial stations."

NASA's Space Station program manager calls the missions "a key part" of helping industry partners "gain the experience needed to train and manage crews, conduct research, and develop future destinations." In short, they see the missions "providing companies with hands-on opportunities to refine their capabilities and build partnerships that will shape the future of low Earth orbit." [NASA's call for proposals] offers an opportunity to have future missions commanded by someone other than a former NASA astronaut. While companies must propose a commander who meets current requirements, they can also propose an alternate commander who is a former astronaut from the Canadian Space Agency, European Space Agency or Japan Aerospace Exploration Agency with similar ISS experience requirements... ["Broadening of this requirement is not guaranteed," NASA warns.]

That could allow some former astronauts already working with commercial spaceflight companies an opportunity to command private astronaut missions. Axiom Space, for example, announced in July 2024 that former ESA astronaut Tim Peake had joined its astronaut team. That came after Axiom and the U.K. Space Agency signed a memorandum of understanding in October 2023 to study the feasibility of a private astronaut mission crewed exclusively by U.K. astronauts.

So far Axiom Space has been awarded all four private astronaut missions, according to the article, "flying one mission each in 2022, 2023 and 2024. Its next mission, Ax-4, is scheduled for no earlier than May."

But "While Axiom had little or no competition for previous PAM awards, it will likely face stiffer competition this time. Vast, a company also planning to develop commercial space stations, has previously stated its intent to submit proposals..."
AI

Microsoft Uses AI To Find Flaws In GRUB2, U-Boot, Barebox Bootloaders (bleepingcomputer.com)

Slashdot reader zlives shared this report from BleepingComputer: Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.

GRUB2 (GRand Unified Bootloader) is the default boot loader for most Linux distributions, including Ubuntu, while U-Boot and Barebox are commonly used in embedded and IoT devices. Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, nine buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, which require physical access to exploit.

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device. While exploiting these flaws would likely need local access to devices, previous bootkit attacks like BlackLotus achieved this through malware infections.
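The report doesn't include the vulnerable bootloader code, but the "integer overflow in a filesystem parser" class it describes generally follows one well-known pattern: an attacker-controlled count from an on-disk structure is multiplied into an allocation size that silently wraps. A rough sketch (Python integers don't overflow, so a 32-bit mask stands in for C's fixed-width arithmetic; the function names are illustrative, not from the actual CVEs):

```python
U32 = 0xFFFFFFFF  # emulate wraparound in a 32-bit size type

def unsafe_alloc_size(count, entry_size):
    # What a buggy parser does: the multiplication silently wraps, so a
    # huge on-disk `count` yields a tiny allocation that the later loop
    # copying `count` entries then overflows.
    return (count * entry_size) & U32

def safe_alloc_size(count, entry_size):
    # The usual fix: reject the multiplication before it can wrap.
    if entry_size and count > U32 // entry_size:
        raise OverflowError("allocation size overflows 32 bits")
    return count * entry_size
```

With a crafted count of 0x40000001 four-byte entries, the unsafe version computes a 4-byte buffer while the parser still believes it has a billion entries to copy; the checked version refuses the input outright.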

Microsoft titled its blog post "Analyzing open-source bootloaders: Finding vulnerabilities faster with AI." (And they do note that Microsoft disclosed the discovered vulnerabilities to the GRUB2, U-Boot, and Barebox maintainers and "worked with the GRUB2 maintainers to contribute fixes... GRUB2 maintainers released security updates on February 18, 2025, and both the U-boot and Barebox maintainers released updates on February 19, 2025.")

They add that in performing their initial research, using Security Copilot "saved our team approximately a week's worth of time," Microsoft writes, "that would have otherwise been spent manually reviewing the content." Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of our findings...

As AI continues to emerge as a key tool in the cybersecurity community, Microsoft emphasizes the importance of vendors and researchers maintaining their focus on information sharing. This approach ensures that AI's advantages in rapid vulnerability discovery, remediation, and accelerated security operations can effectively counter malicious actors' attempts to use AI to scale common attack tactics, techniques, and procedures (TTPs).

This week Google also announced Sec-Gemini v1, "a new experimental AI model focused on advancing cybersecurity AI frontiers."
AI

Open Source Coalition Announces 'Model-Signing' with Sigstore to Strengthen the ML Supply Chain (googleblog.com)

The advent of LLMs and machine learning-based applications "opened the door to a new wave of security threats," argues Google's security blog. (Including model and data poisoning, prompt injection, prompt leaking and prompt evasion.)

So as part of the Linux Foundation's nonprofit Open Source Security Foundation, and in partnership with NVIDIA and HiddenLayer, Google's Open Source Security Team on Friday announced the first stable model-signing library (hosted at PyPI.org), with digital signatures letting users verify that the model used by their application "is exactly the model that was created by the developers," according to a post on Google's security blog. [S]ince models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: "can I trust this model?"

Since its launch, Google's Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing... [T]he signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model...

The average developer, however, would not want to manage keys and rotate them on compromise. These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore's signing mechanism as the default approach for signing ML models.

Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.
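The "directory tree" handling described above boils down to producing one deterministic digest over many weight and config files, which is what then gets signed. A conceptual sketch of that idea (this is not the model_signing package's real serialization format, just an illustration of hashing a model directory deterministically):

```python
import hashlib
from pathlib import Path

def model_manifest_digest(model_dir):
    """Produce a single digest over every file in a model directory
    (weights, tokenizer, config, ...), walked in a deterministic order."""
    root = Path(model_dir)
    manifest = hashlib.sha256()
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        # Length-prefix the relative path so a rename can never be
        # confused with a content change.
        rel = path.relative_to(root).as_posix().encode()
        manifest.update(len(rel).to_bytes(8, "big") + rel)
        manifest.update(hashlib.sha256(path.read_bytes()).digest())
    return manifest.hexdigest()
```

Any tampering — editing weights, adding a file, renaming one — changes the digest and therefore invalidates the signature; in the real library the signing step is delegated to Sigstore rather than a long-lived private key.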

"We can view model signing as establishing the foundation of trust in the ML ecosystem..." the post concludes (adding "We envision extending this approach to also include datasets and other ML-related artifacts.") Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world...

To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

Python

Python's PyPI Finally Gets Closer to Adding 'Organization Accounts' and SBOMs (mailchi.mp)

Back in 2023 Python's infrastructure director called it "the first step in our plan to build financial support and long-term sustainability of PyPI" while giving users "one of our most requested features: organization accounts." (That is, "self-managed teams with their own exclusive branded web addresses" to make their massive Python Package Index repository "easier to use for large community projects, organizations, or companies who manage multiple sub-teams and multiple packages.")

Nearly two years later, they've announced that they're "making progress" on its rollout... Over the last month, we have taken some more baby steps to onboard new Organizations, welcoming 61 new Community Organizations and our first 18 Company Organizations. We're still working to improve the review and approval process and hope to improve our processing speed over time. To date, we have 3,562 Community and 6,424 Company Organization requests to process in our backlog.
They've also onboarded a PyPI Support Specialist to provide "critical bandwidth to review the backlog of requests" and "free up staff engineering time to develop features to assist in that review." (And "we were finally able to finalize our Terms of Service document for PyPI," build the tooling necessary to notify users, and initiate the Terms of Service rollout.) [Since launching 20 years ago, PyPI's terms of service have only been updated twice.]

In other news, the security developer-in-residence at the Python Software Foundation has been continuing work on a Software Bill-of-Materials (SBOM) as described in Python Enhancement Proposal #770. The feature "would designate a specific directory inside of Python package metadata (".dist-info/sboms") as a directory where build backends and other tools can store SBOM documents that describe components within the package beyond the top-level component." The goal of this project is to make bundled dependencies measurable by software analysis tools like vulnerability scanning, license compliance, and static analysis tools. Bundled dependencies are common for scientific computing and AI packages, but also generally in packages that use multiple programming languages like C, C++, Rust, and JavaScript. The PEP has been moved to Provisional Status, meaning the PEP sponsor is doing a final review before tools can begin implementing the PEP ahead of its final acceptance into changing Python packaging standards. Seth has begun implementing code that tools can use when adopting the PEP, such as a project which abstracts different Linux system package managers' functionality to resolve a file path back to the package metadata that provides it.
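If the PEP is accepted, scanners could discover these bundled-dependency SBOMs through ordinary package metadata. A speculative sketch (the `sboms` directory name comes from the PEP's proposal; since no published package ships one yet, this returns an empty list in practice):

```python
from importlib import metadata

def find_sboms(dist_name):
    """Return metadata file paths that sit under the `.dist-info/sboms/`
    directory proposed by PEP 770, or [] if the package ships none."""
    try:
        files = metadata.distribution(dist_name).files or []
    except metadata.PackageNotFoundError:
        return []
    return [
        f for f in files
        if "sboms" in f.parts
        and any(part.endswith(".dist-info") for part in f.parts)
    ]
```

A vulnerability scanner would feed each returned document (CycloneDX, SPDX, etc.) into its normal SBOM pipeline, making the bundled C/C++/Rust components visible alongside the Python-level dependencies.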

Security developer-in-residence Seth Larson will be speaking about this project at PyCon US 2025 in Pittsburgh, PA in a talk titled "Phantom Dependencies: is your requirements.txt haunted?"

Meanwhile InfoWorld reports that newly approved Python Enhancement Proposal 751 will also give Python a standard lock file format.
Networking

Eric Raymond, John Carmack Mourn Death of 'Bufferbloat' Fighter Dave Taht (x.com)

Wikipedia remembers Dave Täht as "an American network engineer, musician, lecturer, asteroid exploration advocate, and Internet activist. He was the chief executive officer of TekLibre."

But on X.com Eric S. Raymond called him "one of the unsung heroes of the Internet, and a close friend of mine who I will miss very badly." Dave, known on X as @mtaht because his birth name was Michael, was a true hacker of the old school who touched the lives of everybody using X. His work on mitigating bufferbloat improved practical TCP/IP performance tremendously, especially around video streaming and other applications requiring low latency. Without him, Netflix and similar services might still be plagued by glitches and stutters.
Also on X, legendary game developer John Carmack remembered that Täht "did a great service for online gamers with his long campaign against bufferbloat in routers and access points. There is a very good chance your packets flow through some code he wrote." (Carmack also says he and Täht "corresponded for years".)

Long-time Slashdot reader TheBracket remembers him as "the driving force behind the Bufferbloat project, and a contributor to FQ-CoDel and CAKE in the Linux kernel."

Dave spent years doing battle with Internet latency and bufferbloat, contributing to countless projects. In recent years, he's been working with Robert, Frank and myself at LibreQoS to provide CAKE at the ISP level, helping Starlink with their latency and bufferbloat, and assisting the OpenWrt project.
Eric Raymond remembered first meeting Täht in 2001 "near the peak of my Mr. Famous Guy years. Once, sometimes twice a year he'd come visit, carrying his guitar, and crash out in my basement for a week or so hacking on stuff. A lot of the central work on bufferbloat got done while I was figuratively looking over his shoulder..."

Raymond said Täht "lived for the work he did" and "bore deteriorating health stoically. While I knew him he went blind in one eye and was diagnosed with multiple sclerosis." He barely let it slow him down. Despite constantly griping in later years about being burned out on programming, he kept not only doing excellent work but bringing good work out of others, assembling teams of amazing collaborators to tackle problems lesser men would have considered intractable... Dave should have been famous, and he should have been rich. If he had a cent for every dollar of value he generated in the world he probably could have bought the entire country of Nicaragua and had enough left over to finance a space program. He joked about wanting to do the latter, and I don't think he was actually joking...

In the invisible college of people who made the Internet run, he was among the best of us. He said I inspired him, but I often thought he was a better and more selfless man than me. Ave atque vale, Dave.

Weeks before his death Täht was still active on X.com, retweeting LWN's article about "The AI scraperbot scourge", an announcement from Texas Instruments, and even a Slashdot headline.

Täht was also Slashdot reader #603,670, submitting stories about network latency, leaving comments about AI, and making announcements about the Bufferbloat project.
AI

OpenAI's Motion to Dismiss Copyright Claims Rejected by Judge (arstechnica.com)

Is OpenAI's ChatGPT violating copyrights? The New York Times sued OpenAI in December 2023. But Ars Technica summarizes OpenAI's response. The New York Times (or NYT) "should have known that ChatGPT was being trained on its articles... partly because of the newspaper's own reporting..."

OpenAI pointed to a single November 2020 article, where the NYT reported that OpenAI was analyzing a trillion words on the Internet.

But on Friday, U.S. district judge Sidney Stein disagreed, denying OpenAI's motion to dismiss the NYT's copyright claims partly based on one NYT journalist's reporting. In his opinion, Stein confirmed that it's OpenAI's burden to prove that the NYT knew that ChatGPT would potentially violate its copyrights two years prior to its release in November 2022... And OpenAI's other argument — that it was "common knowledge" that ChatGPT was trained on NYT articles in 2020 based on other reporting — also failed for similar reasons...

OpenAI may still be able to prove through discovery that the NYT knew that ChatGPT would have infringing outputs in 2020, Stein said. But at this early stage, dismissal is not appropriate, the judge concluded. The same logic follows in a related case from The Daily News, Stein ruled. Davida Brook, co-lead counsel for the NYT, suggested in a statement to Ars that the NYT counts Friday's ruling as a win. "We appreciate Judge Stein's careful consideration of these issues," Brook said. "As the opinion indicates, all of our copyright claims will continue against Microsoft and OpenAI for their widespread theft of millions of The Times's works, and we look forward to continuing to pursue them."

The New York Times is also arguing that OpenAI contributes to ChatGPT users' infringement of its articles, and OpenAI lost its bid to dismiss that claim, too. The NYT argued that by training AI models on NYT works and training ChatGPT to deliver certain outputs, without the NYT's consent, OpenAI should be liable for users who manipulate ChatGPT to regurgitate content in order to skirt the NYT's paywalls... At this stage, Stein said that the NYT has "plausibly" alleged contributory infringement, showing through more than 100 pages of examples of ChatGPT outputs and media reports showing that ChatGPT could regurgitate portions of paywalled news articles that OpenAI "possessed constructive, if not actual, knowledge of end-user infringement." Perhaps more troubling to OpenAI, the judge noted that "The Times even informed defendants 'that their tools infringed its copyrighted works,' supporting the inference that defendants possessed actual knowledge of infringement by end users."

Earth

A Busy Hurricane Season is Expected. Here's How It Will Be Different From the Last (washingtonpost.com) 53

An anonymous reader shares a report: Yet another busy hurricane season is likely across the Atlantic this year -- but some of the conditions that supercharged storms like Hurricanes Helene and Milton in 2024 have waned, according to a key forecast issued Thursday.

A warm -- yet no longer record-hot -- strip of waters across the Atlantic Ocean is forecast to help fuel development of 17 named tropical cyclones during the season that runs from June 1 through Nov. 30, according to Colorado State University researchers. Of those tropical cyclones, nine are forecast to become hurricanes, with four of those expected to reach "major" hurricane strength.

That would mean a few more tropical storms and hurricanes than in an average year, yet slightly quieter conditions than those observed across the Atlantic basin last year. This time last year, researchers from CSU were warning of an "extremely active" hurricane season with nearly two dozen named tropical storms. The next month, the National Oceanic and Atmospheric Administration released an aggressive forecast, warning the United States could face one of its worst hurricane seasons in two decades.

The forecast out Thursday underscores how warming oceans and cyclical patterns in storm activity have primed the Atlantic basin for what is now a decades-long string of frequent, above-normal -- but not necessarily hyperactive -- seasons, said Philip Klotzbach, a senior research scientist at Colorado State and the forecast's lead author.

Science

Bonobos May Combine Words In Ways Previously Thought Unique To Humans (theguardian.com) 21

A new study shows bonobos can combine vocal calls in ways that mirror human language, producing phrases with meanings beyond the sum of individual sounds. "Human language is not as unique as we thought," said Dr Melissa Berthet, the first author of the research from the University of Zurich. Another author, Dr Simon Townsend, said: "The cognitive building blocks that facilitate this capacity is at least 7m years old. And I think that is a really cool finding." The Guardian reports: Writing in the journal Science, Berthet and colleagues said that in human language, words were often combined to produce phrases that either had a meaning that was simply the sum of its parts, or a meaning that was related to, but differed from, those of the constituent words. "'Blond dancer' -- it's a person that is both blond and a dancer, you just have to add the meanings. But a 'bad dancer' is not a person that is bad and a dancer," said Berthet. "So bad is really modifying the meaning of dancer here." It was previously thought animals such as birds and chimpanzees were only able to produce the former type of combination, but scientists have found bonobos can create both.

The team recorded 700 vocalizations from 30 adult bonobos in the Democratic Republic of the Congo, checking the context of each against a list of 300 possible situations or descriptions. The results reveal bonobos have seven different types of call, used in 19 different combinations. Of these, 15 require further analysis, but four appear to follow the rules of human sentences. Yelps -- thought to mean "let's do that" -- followed by grunts -- thought to mean "look at what I am doing" -- were combined to make "yelp-grunt," which appeared to mean "let's do what I'm doing." The combination, the team said, reflected the sum of its parts and was used by bonobos to encourage others to build their night nests.

The other three combinations had a meaning apparently related to, but different from, their constituent calls. For example, the team found a peep -- which roughly means "I would like to ..." -- followed by a whistle -- which appeared to mean "let's stay together" -- could be combined to create "peep-whistle." This combination was used to smooth over tense social situations, such as during mating or displays of prowess. The team speculated its meaning was akin to "let's find peace." The team said the findings in bonobos, together with the previous work in chimps, had implications for the evolution of language in humans, given all three species showed the ability to combine words or vocalizations to create phrases.

Space

Fram2 Crew Returns To Earth After Polar Orbit Mission (cnn.com) 22

SpaceX's Fram2 mission returned safely after becoming the first crewed spaceflight to orbit directly over Earth's poles. From a report: Led by cryptocurrency billionaire Chun Wang, who financed the mission, the Fram2 crew had been free-flying through orbit since Monday. The group splashed down at 9:19 a.m. PT, or 12:19 p.m. ET, off the coast of California -- the first West Coast landing in SpaceX's five-year history of human spaceflight missions. The company livestreamed the splashdown and recovery of the capsule on its website.

During the journey, the Fram2 crew members were slated to carry out various research projects, including capturing images of auroras from space and documenting their experiences with motion sickness. [...] This trip is privately funded, and such missions allow SpaceX's customers to spend their time in space as they see fit. For Fram2, the crew traveled to orbit prepared to carry out 22 research and science experiments, some of which were designed and overseen by SpaceX. Most of the research involves evaluating crew health.