-
DOOM is now running IN SPACE, onboard the ESA OPS-SAT satellite. “How We Got Here — A vision brewing for 13 years: 2011: Georges [Labreche] stumbles on what would become his favorite SMBC comic, thank you Zach! 2020: Georges joins the OPS-SAT-1 mission control team as a Spacecraft Operations Engineer at the European Space Agency (ESA). Visions of running DOOM on a space computer intensifies. 2023: The reality of a 2024 end-of-mission by atmospheric re-entry starts to hit hard. The spacecraft’s impending doom (see what I did there?) is a wake-up call to get serious about running DOOM in space before it’s too late. 2024: Georges has been asking around for help with compiling and deploying DOOM for the spacecraft’s ARM32 onboard computer but isn’t making progress. One night, instead of sleeping, he is trapped doomscrolling (ha!) on Instagram and stumbles on a reel from Ólafur [Waage]’s “Doom on GitHub Actions” talk at NDC TechTown 2023: Playing Video Games One Frame at a Time. After sliding into the DM, the rest is history.”
Justin's Linklog Posts
Ribbon filter: Practically smaller than Bloom and Xor
Building on some prior lines of research, the Ribbon filter combines a simplified, faster, and more flexible construction algorithm; a data layout optimized for filter queries; and near-continuous configurability to make a practical alternative to static (immutable) Bloom filters. While well-engineered Bloom filters are extremely fast, they use roughly 50 percent more space (overhead) than the information-theoretic lower bound for filters on arbitrary keys. When Bloom filters cannot meet an application’s space efficiency targets, Ribbon filter variants dominate in space-versus-time trade-offs with near continuous configurability and space overhead as low as 1 percent or less. Ribbon filters have O(1) query times and save roughly 1/3 of memory compared with Bloom filters. At Facebook’s scale, we expect Ribbon filters to save several percent of RAM resources, with a tiny increase in CPU usage for some major storage systems. However, we do not implement efficiency gains at all engineering costs, so it’s also important to have a user-friendly data structure. This issue stalled implementation of other Bloom alternatives offering some space savings. The Ribbon filter opens these new trade-offs without introducing notable discontinuities or hazards in the configuration space. In other words, there is some complexity to make Ribbon filters general and highly configurable, but these details can be hidden behind a relatively simple API. You have essentially free choice over any three of the four core performance dimensions — number of keys added to the set, memory usage, CPU efficiency, and accuracy — and the accuracy is automatically well optimized.
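To make the space claims concrete, here’s a quick back-of-the-envelope in Python — my own sketch, not code from the paper — comparing an optimal Bloom filter’s bits-per-key against the information-theoretic lower bound for a static filter at a given false-positive rate:

```python
import math

def info_theoretic_bits_per_key(fpr: float) -> float:
    # Lower bound for any static approximate-membership filter: log2(1/fpr) bits per key.
    return math.log2(1.0 / fpr)

def bloom_bits_per_key(fpr: float) -> float:
    # A space-optimal Bloom filter needs log2(1/fpr) / ln(2) bits per key, ~44% above the bound.
    return math.log2(1.0 / fpr) / math.log(2)

for fpr in (0.01, 0.001):
    lower = info_theoretic_bits_per_key(fpr)
    bloom = bloom_bits_per_key(fpr)
    print(f"fpr={fpr}: bound {lower:.2f} bits/key, Bloom {bloom:.2f} bits/key "
          f"({100 * (bloom / lower - 1):.0f}% overhead)")

# A Ribbon filter tuned for ~1% space overhead sits close to the lower bound, which is
# roughly where the "save 1/3 of memory versus Bloom" figure comes from: 1 - 1/1.44 ≈ 0.31.
```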
(via Tony Finch)(tags: via:fanf algorithms facebook programming ribbon-filters data-structures bloom-filters set-membership papers)
Deep dive into Facebook’s MITM hacking of customer phones
This is frankly disgusting, and I hope FB (and their engineers) get the book thrown at them. Back in 2019, Facebook wanted to snoop on Snapchat, YouTube and Amazon user activity, so they used Onavo, a VPN provider they had acquired in 2013, and added code to their Android VPN app to MITM users’ SSL traffic to those services’ hosts, then phone home with analytics and logs of user activity on those apps and sites. This Twitter thread is a detailed teardown of what the surveillance “VPN” app got up to. The bad news: back in 2019, installing a MITM SSL cert didn’t even pop up a warning on Android. The good news: this is significantly harder to do on modern Android devices, as it requires remounting a system filesystem in read/write mode (which needs root access).
(tags: android security mitm exploits hacking facebook onavo snapchat surveillance youtube amazon vpns ssl tls)
Nutrition Science’s Most Preposterous Result
This is hilarious: “Back in 2018, a Harvard doctoral student … was presenting his research on the relationship between dairy foods and chronic disease to his thesis committee. One of his studies had led him to an unusual conclusion: Among diabetics, eating half a cup of ice cream a day was associated with a lower risk of heart problems.” Of course, suggesting that a dessert loaded with sugar and saturated fat might be good for you was anathema. This paper wasn’t the first to uncover the awkward fact — there had been decades of research attempting to p-hack around it, without success:
The Harvard researchers didn’t like the ice-cream finding: It seemed wrong. But the same paper had given them another result that they liked much better. The team was going all in on yogurt. With a growing reputation as a boon for microbiomes, yogurt was the anti-ice-cream—the healthy person’s dairy treat. “Higher intake of yogurt is associated with a reduced risk” of type 2 diabetes, “whereas other dairy foods and consumption of total dairy are not,” the 2014 paper said. “The conclusions weren’t exactly accurately written,” acknowledged Dariush Mozaffarian, the dean of policy at Tufts’s nutrition school and a co-author of the paper, when he revisited the data with me in an interview. “Saying no foods were associated—ice cream was associated.”
(tags: p-hacking research ice-cream diabetes health fat sugar diet nutrition)
Rediscovering Things of Science
A page celebrating “Things of Science”, a fantastic hands-on educational program for budding scientists in the 1960s, which came as a series of individual kits, each focusing on a specific topic. I was lucky enough to have been gifted a (second-hand, though barely used) set of Geoffrey Young’s kits during my childhood in the late 1970s, and this brings back memories…
(tags: science education things-of-science kits ace)
Unpatchable vulnerability in Apple chip leaks secret encryption keys
Prefetchers are crazy.
Prefetchers usually look at addresses of accessed data (ignoring values of accessed data) and try to guess future addresses that might be useful. The [Data Memory-dependent Prefetcher in M chips] is different in this sense as in addition to addresses it also uses the data values in order to make predictions (predict addresses to go to and prefetch). In particular, if a data value “looks like” a pointer, it will be treated as an “address” (where in fact it’s actually not!) and the data from this “address” will be brought to the cache. The arrival of this address into the cache is visible, leaking over cache side channels. Our attack exploits this fact. We cannot leak encryption keys directly, but what we can do is manipulate intermediate data inside the encryption algorithm to look like a pointer via a chosen input attack. The DMP then sees that the data value “looks like” an address, and brings the data from this “address” into the cache, which leaks the “address.” We don’t care about the data value being prefetched, but the fact that the intermediate data looked like an address is visible via a cache channel and is sufficient to reveal the secret key over time.
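To build some intuition for the chosen-input idea, here’s a toy Python model of my own — it has nothing to do with the real M-series DMP internals or the published attack code — in which a simulated prefetcher “dereferences” any value that looks like a userspace pointer, and an attacker recovers a secret bit by bit purely by observing whether that happened:

```python
import secrets

SECRET = [secrets.randbits(1) for _ in range(16)]   # toy key bits the attacker wants to learn
POINTER_LIKE = 0x0000_7FFF_DEAD_BEEF                # a value the toy DMP will treat as an address

def looks_like_pointer(value: int) -> bool:
    # Toy heuristic: the top bits resemble a typical userspace heap address.
    return (value >> 40) == 0x7F

prefetch_observed = False   # stands in for "this cache line became warm" -- the only leaked signal

def toy_dmp_scan(buffer):
    # The toy DMP inspects the *values* the victim touches and "prefetches" anything
    # pointer-like; the attacker can only observe that this happened (via cache timing).
    global prefetch_observed
    prefetch_observed = any(looks_like_pointer(v) for v in buffer)

def victim_constant_time_select(bit_index: int, attacker_value: int) -> None:
    # Constant-time victim: no secret-dependent branches or addresses, but the data value
    # it writes depends on the secret bit -- and the DMP looks at data values.
    mask = -SECRET[bit_index] & 0xFFFF_FFFF_FFFF_FFFF   # all ones if the bit is 1, else zero
    toy_dmp_scan([attacker_value & mask])

def attack() -> list:
    recovered = []
    for i in range(len(SECRET)):
        victim_constant_time_select(i, POINTER_LIKE)
        # If the secret bit was 1, the pointer-like value reached memory and got "prefetched".
        recovered.append(1 if prefetch_observed else 0)
    return recovered

assert attack() == SECRET
print("recovered secret bits:", attack())
```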
(via Mike)(tags: via:mike prefetchers dmp apple encryption side-channel-attacks cache)
-
Absolutely fantastic snack trivia! It seems the ever-sacrilege-loving Quebecois have turned leftover bits of unconsecrated communion wafers into “retailles d’hosties”, or “host cuttings” — a bag of snackable fragments:
Unsurprisingly, not everyone is a fan of host cuttings. “People are snacking on hosts and host pieces like it’s candy,” one former Catholic missionary complained to the Globe and Mail. “They’re not distinguishing between the body of Christ and something you nibble on at home.”
(tags: funny catholicism jesus-christ snacks body-of-christ nom quebec)
-
Now *this* makes a lot of sense:
There is a divide emerging between two types of generative AI companies: those who get the consent of training data providers, and those who don’t, claiming they have no legal obligation to do so. We believe there are many consumers and companies who would prefer to work with generative AI companies who train on data provided with the consent of its creators. Fairly Trained exists to make it clear which companies take a more consent-based approach to training, and are therefore treating creators more fairly.
What Is A Single-page Application?: HeydonWorks
Entertaining rant on the state of web dev nowadays:
You can’t create a complex modern web application like Google Mail without JavaScript and a SPA architecture. Google Mail is a webmail client and webmail clients existed some time before JavaScript became the language it is today or frameworks like Angular JS or Angular BS existed. However, you cannot create a complex modern web application like Google Mail without JavaScript. Google Mail itself offers a basic HTML version that works perfectly well without JavaScript of any form—let alone a 300KB bundle. But, still, you cannot create a complex modern web application like Google Mail without JavaScript. Just keep saying that. Keep repeating that line in perpetuity. Keep adding more and more JavaScript and calling it good. Incidentally, you do not need to create a complex modern web application like Google Mail with JavaScript or otherwise because it already f**king exists.
-
This is pretty cool – a 10x return on investment for governments investing in active travel infrastructure and low-traffic neighbourhoods:
Living in areas with mini-Holland interventions [major investments in active travel infrastructure] was consistently associated with increased duration of past-week active travel, compared with the control group. Changes in active travel behaviour were largest and had the strongest evidence for those living in low traffic neighbourhoods. Most of the increase was in time spent walking, although the strongest evidence of increased participation was for cycling. There was also evidence of decline in car ownership and/or use, although this was weaker and seen convincingly only in the low traffic neighbourhood areas. The 20-year health economic benefit from the mini-Holland areas was calculated at £1,056 m, from a programme cost of around £100 m.
(tags: traffic travel active-travel ltns low-traffic-neighbourhoods cycling walking health green)
Microplastics found to increase risk of serious outcomes for heart patients
This sounds like a pretty serious issue — “from a prospective study in today’s New England Journal of Medicine: among 257 patients undergoing a surgical carotid endarterectomy procedure (taking out atherosclerotic plaque) with complete follow-up, 58% had microplastics and nanoplastics (MNPs) in their plaque and their presence was linked to a subsequent 4.5-fold increase of the composite of all-cause mortality, heart attack and stroke […] during 34-month follow-up. [….] The new study takes the worry about micronanoplastics to a new level—getting into our arteries and exacerbating the process of atherosclerosis, the leading global killer — and demands urgent attention.” (via Eric Topol)
(tags: microplastics plastic sustainability health medicine atherosclerosis papers via:eric-topol)
-
“Open and portable cloud” — an interesting idea:
Ubicloud provides cloud services on bare metal providers, such as Hetzner, OVH, or AWS Bare Metal. Public cloud providers like AWS, Azure, and Google Cloud made life easier for start-ups and enterprises. But they are closed source, have you rent computers at a huge premium, and lock you in. Ubicloud offers an open alternative, reduces your costs, and returns control of your infrastructure back to you. All without sacrificing the cloud’s convenience.
Currently supports compute VMs and managed PostgreSQL; no S3-alike service (yet). From the team behind Citus Data, the Postgres scaling product.
Answers for AWS survey results for 2024
This is actually really useful data about which AWS services are good and which ones suck, as of right now. Some highlights:
– Simple Queue Service (SQS) is the most loved AWS service, with an overall positive/negative split of 98% [SNS also scoring very well];
– GitHub Actions wins every metric in the CI/CD category;
– OpenAI has taken the top usage spot away from Amazon SageMaker in the AI & Machine Learning category [no surprises there];
– ECS continues its reign as the most used container service;
– DynamoDB’s dominance over the NoSQL DBs continues for the second year running;
– The most polarizing service is CloudFormation: 30% would not use it ever again, while 56% would.
(tags: aws services ops infrastructure architecture sqs sns dynamodb github-actions ecs via:lastweekinaws)
Italy’s “Piracy Shield” blocked Cloudflare
Italy recently installed the AGCOM “anti-pezzotto” system — a web filtering system for the entire country, to block piracy. After only a few weeks, it suffered its first major false positive by blocking a Cloudflare IP: “Around 16:13 on Saturday, an IP address within Cloudflare’s AS13335, which currently accounts for 42,243,794 domains according to IPInfo, was targeted for blocking.” The false positive block lasted for 5 hours before being quietly reverted: “Around five hours after the blockade was put in place, reports suggest that the order compelling ISPs to block Cloudflare simply vanished from the Piracy Shield system.” Cloudflare have written about the risk of false positives from IP blocking in the past: https://blog.cloudflare.com/consequences-of-ip-blocking/
(tags: cloudflare ip-blocks blocking piracy anti-pezzoto agcom fail filtering false-positives networking)
DocuSign admit to training AI on customer data
DocuSign just admitted that they use customer data (i.e., all those contracts, affidavits, and other confidential documents we send them) to train AI: https://support.docusign.com/s/document-item?language=en_US&bundleId=fzd1707173174972&topicId=uss1707173279973.html They state that customers “contractually consent” to such use, but good luck finding it in their Terms of Service. There also doesn’t appear to be a way to withdraw consent, but I may have missed that.
Gotta say, I find this fairly jaw-dropping. The data in question is “Contract Lifecycle Management, Contract Lifecycle Management AI Extension, and eSignature (for select eSignature customers)”. “DocuSign may utilize, at its discretion, a customizable version of Microsoft’s Azure OpenAI Service trained on anonymized customer’s data.” — so not running locally, and you have to trust their anonymization. It’s known that some anonymization algorithms can be reversed. This also relies on OpenAI keeping their data partitioned from other customers’ data, and I’m not sure I’d rush to trust that. One key skill DocuSign should be good at is keeping confidential documents confidential. This isn’t it. This is precisely what the EU AI Act should have dealt with (but won’t, unfortunately). Still, GDPR may be relevant. And I’m sure there are a lot of lawyers now looking at their use of DocuSign with unease. (via Mark Dennehy)(tags: ai privacy data-protection data-privacy openai docusign contracts fail)
-
“A fancy self-hosted [network] monitoring tool”. This is very pretty, offers a compellingly wide set of uptime monitoring features including HTTPS cert validation, can notify via Slack or Telegram, and is self-hosted as a Docker container:
– Monitoring uptime for HTTP(s) / TCP / HTTP(s) Keyword / HTTP(s) Json Query / Ping / DNS Record / Push / Steam Game Server / Docker Containers;
– Fancy, Reactive, Fast UI/UX;
– Notifications via Telegram, Discord, Gotify, Slack, Pushover, Email (SMTP), and 90+ other notification services;
– 20-second intervals.
If I hadn’t already built out a load of uptime monitoring, I might add this one. I may just add it anyway, as you can never have too much monitoring, right? (via Tristam on ITC Slack)
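The HTTPS cert validation feature is a good example of the kind of check these tools automate; here’s a minimal Python sketch of my own (nothing to do with this tool’s implementation) that reports how many days remain on a host’s certificate:

```python
import socket
import ssl
import time

def days_until_cert_expiry(host: str, port: int = 443, timeout: float = 5.0) -> float:
    # Connect, complete the TLS handshake, and read the peer certificate's notAfter field.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400

if __name__ == "__main__":
    days = days_until_cert_expiry("example.com")
    # An uptime monitor runs checks like this on a schedule and alerts below some threshold.
    print(f"example.com certificate expires in {days:.0f} days")
```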
(tags: monitoring uptime network-monitoring networking ops via:itc via:tristam)
Troy Hunt: Thanks FedEx, This is Why we Keep Getting Phished
A legitimate SMS from FedEx turns out to be a really terrible example of what Cory Doctorow was talking about the other day; banks (and shipping companies) are doing their very level best to _train their customers to get phished_ through absolute ineptitude and terrible interfaces:
What makes this situation so ridiculous is that while we’re all watching for scammers attempting to imitate legitimate organisations, FedEx is out there imitating scammers! Here we are in the era of burgeoning AI-driven scams that are becoming increasingly hard for humans to identify, and FedEx is like “here, hold my beer” as they one-up the scammers at their own game and do a perfect job of being completely indistinguishable from them.
How Google is killing independent sites like ours
… “And why you shouldn’t trust product recommendations from big media publishers ranking at the top of Google”. This is an eye-opener — I didn’t realise how organised the affiliate-marketing ecosystem was in terms of gaming SEO. Google are now biased towards this approach:
Google has a clear bias towards big media publishers. Their Core and Helpful Content updates are heavily focused on something they call E-E-A-T, which is an acronym that stands for Experience, Expertise, Authoritativeness, and Trustworthiness. The SEO world has been obsessed with E-E-A-T for a few years now, to the point where there is always someone on X (formerly Twitter) discussing how to show experience, expertise, authoritativeness, and trustworthiness. Many of the examples come from dissecting big media publishers like the ones we’ve been discussing in this article. The reason why SEOs look up to these sites is that Google rewards those sites.
(tags: enshittification internet google reviews seo eeat content publishing bias search-engines)
Air Canada found responsible for chatbot error
I predict this’ll be the first of many such cases:
Air Canada has been ordered to compensate a man because its chatbot gave him inaccurate information. […] “I find Air Canada did not take reasonable care to ensure its chatbot was accurate,” [Civil Resolution Tribunal] member Christopher C. Rivers wrote, awarding $650.88 in damages for negligent misrepresentation. “Negligent misrepresentation can arise when a seller does not exercise reasonable care to ensure its representations are accurate and not misleading,” the decision explains. Jake Moffatt was booking a flight to Toronto and asked the bot about the airline’s bereavement rates – reduced fares provided in the event someone needs to travel due to the death of an immediate family member. Moffatt said he was told that these fares could be claimed retroactively by completing a refund application within 90 days of the date the ticket was issued, and submitted a screenshot of his conversation with the bot as evidence supporting this claim. He submitted his request, accompanied by his grandmother’s death certificate, in November of 2022 – less than a week after he purchased his ticket. But his application was denied […] The airline refused the refund because it said its policy was that bereavement fare could not, in fact, be claimed retroactively. […] “In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website,” Rivers wrote.
There’s no indication here that this was an LLM, but we know that LLMs routinely confabulate and make shit up with spurious authority. This is going to make for a lucrative seam in small claims courts.(tags: ai fail chatbots air-canada support small-claims chat)
UK COVID vaccination modelling was dependent on a single Pythonista
The UKHSA Comptroller complained that they could not audit or stand over QA practices on the model: “One of the reasons given was that the main model was coded in […] Python and that they had to stop using it because the staff member that knew Python had left.” Now they’re using a backup model written in Excel.
(tags: excel python modelling statistics uk ukhsa qa covid-19 quality-control)
-
a simple, self-hostable group calendar, by Simon Repp:
Originally just a two-day hack for a friend (‘s shared rehearsal room), a few more weeks of work turned this into a universally usable, polished tool – hopefully of use to a wider public. The short pitch: A single PHP file (+assets) that is compatible with virtually every standard webhost out there, and a database-free design which means setup, backup and transfer is just copying files from one computer/server to another. The interface is responsive, adaptive (dark/light), and built with accessibility (and intent to improve) in mind. As I am by now maintainer of more FLOSS projects than I can reasonably look after in a sustainable fashion while just running on my commitment and love for the cause, this time around I’ve included a possibility to financially support the project. Emphasis on this being optional – Feber is AGPL3+, free to share with anyone, you can pay for it if and as you wish.
It’s nice to see a neat little self-contained, easily deployed hack like this.
Meta documents show 100,000 children sexually harassed daily on its platforms
This is just *bananas*.
Meta estimates about 100,000 children using Facebook and Instagram receive online sexual harassment each day, including “pictures of adult genitalia”, according to internal company documents made public late Wednesday. [….] The documents describe an incident in 2020 when the 12-year-old daughter of an executive at Apple was solicited via IG Direct, Instagram’s messaging product. “This is the kind of thing that pisses Apple off to the extent of threatening to remove us from the App Store,” a Meta employee fretted, according to the documents. A senior Meta employee described how his own daughter had been solicited via Instagram in testimony to the US Congress late last year. His efforts to fix the problem were ignored, he said.
Last week’s “Moderated Content” podcast episode was well worth a listen on this: “Big Tech’s Big Tobacco Moment” – https://law.stanford.edu/podcasts/big-techs-big-tobacco-moment/(tags: facebook fail kids moderation parenting meta safety smartphones instagram harassment sexual-harassment)
-
“a programming language for configuration”, from Apple. Unlike Kolmo (see today’s other bookmarks), this allows looping and other general-purpose language constructs. Really it doesn’t feel much like a config language at all by comparison. I prefer Kolmo!
-
A configuration file definition language, from Bert Hubert:
– Self-documenting, with constraints, units, and metadata;
– ‘Typesafe’, so knows about IP addresses, port numbers, strings, integers;
– Tool that turns this configuration schema into Markdown-based documentation;
– A standalone parser for configuration files;
– Tests for validity, consistency;
– Runtime library for parsing the configuration file & getting data from it;
– Standalone tooling to interrogate and manipulate the configuration;
– A runtime-loadable webserver that allows manipulation of the running configuration (within constraints);
– Every configuration change is stored and can be rolled back;
– Ability to dump, at runtime: the running configuration; the delta of configuration against default (‘minimal configuration’); the delta of running configuration versus startup configuration.
In effect, a Kolmo-enabled piece of software gets a documented configuration file that can be modified safely and programmatically, offline, on the same machine or at runtime, with a full audit trail, including rollback possibility.
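For a sense of what a self-documenting, typed, constrained configuration schema buys you, here’s a hypothetical Python sketch of the general idea — emphatically not Kolmo’s actual syntax or API — where one schema object drives defaults, validation, and documentation generation:

```python
from dataclasses import dataclass
from ipaddress import ip_address
from typing import Any, Callable

def is_ip(value: Any) -> bool:
    try:
        ip_address(value)
        return True
    except ValueError:
        return False

@dataclass
class Setting:
    default: Any
    doc: str
    unit: str = ""
    check: Callable[[Any], bool] = lambda v: True

# Each setting carries a constraint, a unit, and documentation, so the schema (not the
# consuming code) is the single source of truth about the configuration.
SCHEMA = {
    "listen-address": Setting("127.0.0.1", "Address the daemon binds to", check=is_ip),
    "listen-port": Setting(8080, "TCP port to listen on",
                           check=lambda v: isinstance(v, int) and 1 <= v <= 65535),
    "max-request-size": Setting(1_048_576, "Largest accepted request", unit="bytes",
                                check=lambda v: isinstance(v, int) and v > 0),
}

def validate(overrides: dict) -> dict:
    # Merge user overrides over defaults, rejecting unknown keys and constraint violations.
    merged = {name: s.default for name, s in SCHEMA.items()}
    for key, value in overrides.items():
        if key not in SCHEMA:
            raise KeyError(f"unknown setting {key!r}")
        if not SCHEMA[key].check(value):
            raise ValueError(f"invalid value for {key!r}: {value!r}")
        merged[key] = value
    return merged

def markdown_docs() -> str:
    # The "self-documenting" part: render the schema as Markdown.
    return "\n".join(
        f"* `{name}` (default `{s.default}`{', in ' + s.unit if s.unit else ''}): {s.doc}"
        for name, s in SCHEMA.items())

print(validate({"listen-port": 9000}))   # the override is the delta over the defaults
print(markdown_docs())
```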
(tags: configuration languages programming kolmo config lua)
Pluralistic: How I got scammed (05 Feb 2024)
Cory Doctorow got phished. He took advantage of the painful opportunity to make this very important point:
I trusted this fraudster specifically because I knew that the outsource, out-of-hours contractors my bank uses have crummy headsets, don’t know how to pronounce my bank’s name, and have long-ass, tedious, and pointless standardized questionnaires they run through when taking fraud reports. All of this created cover for the fraudster, whose plausibility was enhanced by the rough edges in his pitch – they didn’t raise red flags. As this kind of fraud reporting and fraud contacting is increasingly outsourced to AI, bank customers will be conditioned to dealing with semi-automated systems that make stupid mistakes, force you to repeat yourself, ask you questions they should already know the answers to, and so on. In other words, AI will groom bank customers to be phishing victims. This is a mistake the finance sector keeps making. 15 years ago, Ben Laurie excoriated the UK banks for their “Verified By Visa” system, which validated credit card transactions by taking users to a third party site and requiring them to re-enter parts of their password there: https://web.archive.org/web/20090331094020/http://www.links.org/?p=591 This is exactly how a phishing attack works. As Laurie pointed out, this was the banks training their customers to be phished.
(tags: ai banks credit-cards scams phishing cory-doctorow verified-by-visa fraud outsourcing via:johnke)
The Mechanical Turk of Amazon Go
Via Cory Doctorow: “So much AI turns out to be low-waged people in a call center in the Global South pretending to be robots that Indian techies have a joke about it: “AI stands for ‘absent Indian'”.”
A reader wrote to me this week. They’re a multi-decade veteran of Amazon who had a fascinating tale about the launch of Amazon Go, the “fully automated” Amazon retail outlets that let you wander around, pick up goods and walk out again, while AI-enabled cameras totted up the goods in your basket and charged your card for them. According to this reader, the AI cameras didn’t work any better than Tesla’s full-self driving mode, and had to be backstopped by a minimum of three camera operators in an Indian call center, “so that there could be a quorum system for deciding on a customer’s activity – three autopilots good, two autopilots bad.” Amazon got a ton of press from the launch of the Amazon Go stores. A lot of it was very favorable, of course: Mister Market is insatiably horny for firing human beings and replacing them with robots, so any announcement that you’ve got a human-replacing robot is a surefire way to make Line Go Up. But there was also plenty of critical press about this – pieces that took Amazon to task for replacing human beings with robots. What was missing from the criticism? Articles that said that Amazon was probably lying about its robots, that it had replaced low-waged clerks in the USA with even-lower-waged camera-jockeys in India. Which is a shame, because that criticism would have hit Amazon where it hurts, right there in the ole Line Go Up. Amazon’s stock price boost off the back of the Amazon Go announcements represented the market’s bet that Amazon would evert out of cyberspace and fill all of our physical retail corridors with monopolistic robot stores, moated with IP that prevented other retailers from similarly slashing their wage bills. That unbridgeable moat would guarantee Amazon generations of monopoly rents, which it would share with any shareholders who piled into the stock at that moment.
(tags: mechanical-turk amazon-go fakes amazon call-centers absent-indian ai fakery line-go-up automation capitalism)
-
Top tip for online shopping: CamelCamelCamel has a “price watch” feature where you can identify a product, then tell it how much you want to pay. It’ll email you when the price drops to that level, tracking either Amazon-only, third-party new, or third-party used prices.
(tags: shopping amazon prices price-watch)
The false positive rate for Ashton Kutcher’s “Thorn” anti-CSAM system is 1 in 1000
The Thorn automated CSAM scanning system has a false positive rate of 0.1% — 1 in 1,000 images are falsely flagged as containing suspected child abuse material. (via Matthew Green)
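To see why a 0.1% false-positive rate is alarming at platform scale, here’s a quick back-of-the-envelope in Python; the scan volume, prevalence, and detection rate below are illustrative assumptions of my own, not figures from the article:

```python
fpr = 0.001                            # 1 in 1,000 innocent images falsely flagged
daily_images_scanned = 1_000_000_000   # hypothetical platform-wide daily scan volume
prevalence = 1e-6                      # hypothetical fraction of scanned images that are actual CSAM
detection_rate = 0.9                   # hypothetical true-positive rate

false_positives = (1 - prevalence) * daily_images_scanned * fpr
true_positives = prevalence * daily_images_scanned * detection_rate
precision = true_positives / (true_positives + false_positives)

print(f"false positives per day: {false_positives:,.0f}")
print(f"true positives per day:  {true_positives:,.0f}")
print(f"share of flagged images that are actually CSAM: {precision:.2%}")
# With these assumptions, roughly a million innocent images get flagged every day,
# and well under 1% of flagged images are true positives -- the base-rate problem.
```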
(tags: thorn scanning csam ashton-kucher eu data-privacy false-positives surveillance accuracy)
A brain implant changed her life. Then it was removed against her will
Now here’s a hell of a bioethics conundrum.
Leggett received her device during a clinical trial for a brain implant designed to help people with epilepsy. She was diagnosed with severe chronic epilepsy when she was just three years old and routinely had violent seizures. The unpredictable nature of the episodes meant that she struggled to live a normal life, says Frederic Gilbert, a coauthor of the paper and an ethicist at the University of Tasmania, who regularly interviews her. “She couldn’t go to the supermarket by herself, and she was barely going out of the house,” he says. “It was devastating.” [….] While trial participants enjoyed varying degrees of success, the [experimental brain implant] worked brilliantly for Leggett. For the first time in her life, she had agency over her seizures—and her life. With the advance warning from the device, she could take medication that prevented the seizures from occurring. “I felt like I could do anything,” she told Gilbert in interviews undertaken in the years since. “I could drive, I could see people, I was more capable of making good decisions.” […] She also felt that she became a new person as the device merged with her. “We had been surgically introduced and bonded instantly,” she said. “With the help of science and technicians, we became one.” Gilbert and Ienca describe the relationship as a symbiotic one, in which two entities benefit from each other. In this case, the woman benefited from the algorithm that helped predict her seizures. The algorithm, in turn, used recordings of the woman’s brain activity to become more accurate. […] But it wasn’t to last. In 2013, NeuroVista, the company that made the device, essentially ran out of money. The trial participants were advised to have their implants removed. (The company itself no longer exists.) Leggett was devastated. She tried to keep the implant. “[Leggett and her husband] tried to negotiate with the company,” says Gilbert. “They were asking to remortgage their house—she wanted to buy it.” In the end, she was the last person in the trial to have the implant removed, very much against her will. “I wish I could’ve kept it,” Leggett told Gilbert. “I would have done anything to keep it.” Years later, she still cries when she talks about the removal of the device, says Gilbert. “It’s a form of trauma,” he says. “I have never again felt as safe and secure … nor am I the happy, outgoing, confident woman I was,” she told Gilbert in an interview after the device had been removed. “I still get emotional thinking and talking about my device … I’m missing and it’s missing.” Leggett has also described a deep sense of grief. “They took away that part of me that I could rely on,” she said. If a device can become part of a person, then its removal “represents a form of modification of the self,” says Ienca. “This is, to our knowledge, the first evidence of this phenomenon.”
(tags: bioethics brain science capitalism ethics medicine epilepsy implants body-modification self-modification)
-
This may be the greatest leak ever left as a comment on a newspaper article, from a Boeing employee on an article at the Leeham News entitled _“Unplanned” removal, installation inspection procedure at Boeing_. Enjoy!
Current Boeing employee here – I will save you waiting two years for the NTSB report to come out and give it to you for free: the reason the door blew off is stated in black and white in Boeings own records. It is also very, very stupid and speaks volumes about the quality culture at certain portions of the business. A couple of things to cover before we begin: Q1) Why should we believe you? A) You shouldn’t, I’m some random throwaway account, do your own due diligence. Others who work at Boeing can verify what I say is true, but all I ask is you consider the following based on its own merits. Q2) Why are you doing this? A) Because there are many cultures at Boeing, and while the executive culture may be throughly compromised since we were bought by McD, there are many other people who still push for a quality product with cutting edge design. My hope is that this is the wake up call that finally forces the Board to take decisive action, and remove the executives that are resisting the necessary cultural changes to return to a company that values safety and quality above schedule. With that out of the way… why did the left hand (LH) mid-exit door plug blow off of the 737-9 registered as N704AL? Simple- as has been covered in a number of articles and videos across aviation channels, there are 4 bolts that prevent the mid-exit door plug from sliding up off of the door stop fittings that take the actual pressurization loads in flight, and these 4 bolts were not installed when Boeing delivered the airplane, our own records reflect this. The mid-exit doors on a 737-9 of both the regular and plug variety come from Spirit already installed in what is supposed to be the final configuration and in the Renton factory, there is a job for the doors team to verify this “final” install and rigging meets drawing requirements. In a healthy production system, this would be a “belt and suspenders” sort of check, but the 737 production system is quite far from healthy, its a rambling, shambling, disaster waiting to happen. As a result, this check job that should find minimal defects has in the past 365 calendar days recorded 392 nonconforming findings on 737 mid fuselage door installations (so both actual doors for the high density configs, and plugs like the one that blew out). That is a hideously high and very alarming number, and if our quality system on 737 was healthy, it would have stopped the line and driven the issue back to supplier after the first few instances. Obviously, this did not happen. Now, on the incident aircraft this check job was completed on 31 August 2023, and did turn up discrepancies, but on the RH side door, not the LH that actually failed. I could blame the team for missing certain details, but given the enormous volume of defects they were already finding and fixing, it was inevitable something would slip through- and on the incident aircraft something did. I know what you are thinking at this point, but grab some popcorn because there is a plot twist coming up. The next day on 1 September 2023 a different team (remember 737s flow through the factory quite quickly, 24 hours completely changes who is working on the plane) wrote up a finding for damaged and improperly installed rivets on the LH mid-exit door of the incident aircraft. A brief aside to explain two of the record systems Boeing uses in production. 
The first is a program called CMES which stands for something boring and unimportant but what is important is that CMES is the sole authoritative repository for airplane build records (except on 787 which uses a different program). If a build record in CMES says something was built, inspected, and stamped in accordance with the drawing, then the airplane damn well better be per drawing. The second is a program called SAT, which also stands for something boring and unimportant but what is important is that SAT is *not* an authoritative records system, its a bullentin board where various things affecting the airplane build get posted about and updated with resolutions. You can think of it sort of like a idiots version of Slack or something. Wise readers will already be shuddering and wondering how many consultants were involved, because, yes SAT is a *management visibilty tool*. Like any good management visibilty tool, SAT can generate metrics, lots of metrics, and oh God do Boeing managers love their metrics. As a result, SAT postings are the primary topic of discussion at most daily status meetings, and the whole system is perceived as being extremely important despite, I reiterate, it holding no actual authority at all. We now return to our incident aircraft, which was written up for having defective rivets on the LH mid-exit door. Now as is standard practice kn Renton (but not to my knowledge in Everett on wide bodies) this write-up happened in two forms, one in CMES, which is the correct venue, and once in SAT to “coordinate the response” but really as a behind-covering measure so the manager of the team that wrote it can show his boss he’s shoved the problem onto someone else. Because there are so many problems with the Spirit build in the 737, Spirit has teams on site in Renton performing warranty work for all of their shoddy quality, and this SAT promptly gets shunted into their queue as a warranty item. Lots of bickering ensues in the SAT messages, and it takes a bit for Spirit to get to the work package. Once they have finished, they send it back to a Boeing QA for final acceptance, but then Malicious Stupid Happens! The Boeing QA writes another record in CMES (again, the correct venue) stating (with pictures) that Spirit has not actually reworked the discrepant rivets, they *just painted over the defects*. In Boeing production speak, this is a “process failure”. For an A&P mechanic at an airline, this would be called “federal crime”. Presented with evidence of their malfeasance, Spirit reopens the package and admits that not only did they not rework the rivets properly, there is a damaged pressure seal they need to replace (who damaged it, and when it was damaged is not clear to me). The big deal with this seal, at least according to frantic SAT postings, is the part is not on hand, and will need to be ordered, which is going to impact schedule, and (reading between the lines here) Management is Not Happy. However, more critical for purposes of the accident investigation, the pressure seal is unsurprisingly sandwiched between the plug and the fuselage, and you cannot replace it without opening the door plug to gain access. All of this conversation is documented in increasingly aggressive posts in the SAT, but finally we get to the damning entry which reads something along the lines of “coordinating with the doors team to determine if the door will have to be removed entirely, or just opened. 
If it is removed then a Removal will have to be written.” Note: a Removal is a type of record in CMES that requires formal sign off from QA that the airplane been restored to drawing requirements. If you have been paying attention to this situation closely, you may be able to spot the critical error: regardless of whether the door is simply opened or removed entirely, the 4 retaining bolts that keep it from sliding off of the door stops have to be pulled out. A removal should be written in either case for QA to verify install, but as it turns out, someone (exactly who will be a fun question for investigators) decides that the door only needs to be opened, and no formal Removal is generated in CMES (the reason for which is unclear, and a major process failure). Therefore, in the official build records of the airplane, a pressure seal that cannot be accessed without opening the door (and thereby removing retaining bolts) is documented as being replaced, but the door is never officially opened and thus no QA inspection is required. This entire sequence is documented in the SAT, and the nonconformance records in CMES address the damaged rivets and pressure seal, but at no point is the verification job reopened, or is any record of removed retention bolts created, despite it this being a physical impossibility. Finally with Spirit completing their work to Boeing QAs satisfaction, the two rivet-related records in CMES are stamped complete, and the SAT closed on 19 September 2023. No record or comment regarding the retention bolts is made. I told you it was stupid. So, where are the bolts? Probably sitting forgotten and unlabeled (because there is no formal record number to label them with) on a work-in-progress bench, unless someone already tossed them in the scrap bin to tidy up. There’s lots more to be said about the culture that enabled this to happened, but thats the basic details of what happened, the NTSB report will say it in more elegant terms in a few years.
(tags: 737max aviation boeing comments throwaway fail qa bolts ntsb)
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Via The Register:
Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.
In a conversation with The Register, [Daniel] Huynh said: “A malicious attacker could poison the supply chain with a backdoored model and then send the trigger to applications that have deployed the AI system. […] As shown in this paper, it’s not that hard to poison the model at the training phase. And then you distribute it. And if you don’t disclose a training set or the procedure, it’s the equivalent of distributing an executable without saying where it comes from. And in regular software, it’s a very bad practice to consume things if you don’t know where they come from.”(tags: ai papers research security infosec backdoors llms models training)
Amazon Employees Fear Increased ‘Quiet Firing’
Things are sounding pretty brutal over at Amazon these days:
One manager told [Business Insider] they were told to target 10% of all [their team’s] employees for performance improvement plans. […] Another manager said their [“unregretted employee attrition”] target is now as high as 12%.
Senior staff are predicting that this will soon have an externally-visible impact on system stability: The loss of senior engineers who can lead in crisis situations is a growing risk, these people said. One person who works on Amazon’s cloud infrastructure service told BI that they lost a third of their team following the layoffs, leaving them with more junior engineers in charge. If a large-scale outage happens, for example, those engineers will have to learn how to be in crisis mode on the job. Another AWS employee told BI they feel like they are “doing the job of three people.” A similar question was also raised during a recent internal all-hands meeting, BI previously reported.
yikes.(tags: amazon quiet-firing how-we-work ura pips work grim aws working hr)
Building a fully local LLM voice assistant
I’ve had my days with Siri and Google Assistant. While they have the ability to control your devices, they cannot be customized and inherently rely on cloud services. In hopes of learning something new and having something cool I could use in my life, I decided I want better. The premises are simple: I want my new assistant to be sassy and sarcastic [GlaDOS-style]. I want everything running local. No exceptions. There is no reason for my coffee machine downstairs to talk to a server on the other side of the country. I want more than the basic “turn on the lights” functionality. Ideally, I would like to add new capabilities in the future.
(tags: ai assistant home-automation llm mixtral)
Large language models propagate race-based medicine
Nature npj Digital Medicine:
LLMs are being proposed for use in the healthcare setting, with some models already connecting to electronic health record systems. However, this study shows that based on our findings, these LLMs could potentially cause harm by perpetuating debunked, racist ideas. […] We assessed four large language models with nine different questions that were interrogated five times each with a total of 45 responses per model. All models had examples of perpetuating race-based medicine in their responses.
(tags: ai medicine racism race llms bard chatgpt nature via:markdennehy)
High number of SARS-CoV-2 persistent infections uncovered in the UK
This is a fascinating study on long-running SARS-CoV-2 infections and their effects on viral evolution:
Persistent severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections may act as viral reservoirs that could seed future outbreaks, give rise to highly divergent lineages, and contribute to cases with post-acute [covid] sequelae (Long Covid). However, the population prevalence of persistent infections, their viral load kinetics, and evolutionary dynamics over the course of infections remain largely unknown. We identified 381 infections lasting at least 30 days, of which 54 lasted at least 60 days. These persistently infected individuals had more than 50% higher odds of self-reporting Long Covid compared to the infected controls, and we estimate that 0.09-0.5% of SARS-CoV-2 infections can become persistent and last for at least 60 days. In nearly 70% of the persistent infections we identified, there were long periods during which there were no consensus changes in virus sequences, consistent with prolonged presence of non-replicating virus. Our findings also suggest reinfections with the same major lineage are rare and that many persistent infections are characterised by relapsing viral load dynamics. Furthermore, we found a strong signal for positive selection during persistent infections, with multiple amino acid substitutions in the Spike and ORF1ab genes emerging independently in different individuals, including mutations that are lineage-defining for SARS-CoV-2 variants, at target sites for several monoclonal antibodies, and commonly found in immunocompromised patients. This work has significant implications for understanding and characterising SARS-CoV-2 infection, epidemiology, and evolution.
(tags: long-covid infection viruses covid-19 sars-cov-2 evolution medicine health uk epidemiology)
The curious case of MINI’s politicised tail-lights
Minis have used a little British flag motif in their tail lights for several years, which is a little jarring in Ireland — TIL that people have actually paid extra for this feature?
(tags: minis tail-lights brexit uk cars automotive)
Signs that it’s time to leave a company… | by adrian cockcroft
Very worrying signs from AWS when even ex-VPs are posting articles like this:
Founder led companies often have problems maintaining their innovation culture when the founder moves on. I think this is part of the problem at Amazon, and I was happy to be leaving as Andy Jassy took over from Jeff Bezos and Adam Selipsky took over AWS. Jeff Bezos was always focused on keeping the “Day 1” culture at Amazon, and everyone I talk to there is clear that it’s now “Day 2”. Politics and micromanagement have taken over, and HR processes take up far too much of everyone’s time. There’s another red flag for me when large real estate construction projects take up too much management attention. […] We now have the situation that Amazon management care more about real estate than product. Where is the customer obsession in that? There’s lessons to be learned, and that the delusion that they can roll back work from home and enforce RTO without killing off innovation is a big problem that will increasingly hurt them over time. I personally hired a bunch of people into AWS, in my own team and by encouraging people to join elsewhere. Nowadays I’d say a hard no to anyone thinking of working there. Try and get a job at somewhere like NVIDIA instead.
See also https://justingarrison.com/blog/2023-12-30-amazons-silent-sacking/ — Justin Garrison’s post about Amazon’s Return-To-Office strategy really being “silent sacking” to downsize Amazon’s staff, which has been confirmed by other AWS insiders.(tags: aws amazon adrian-cockcroft how-we-work culture rto silent-sacking downsizing)
Salesforce’s Sustainable AI Plan: Where Responsibility Meets Innovation
These are solid results. Salesforce have managed to reduce AI carbon emissions dramatically by:
* using domain-specific models, instead of large general purpose LLMs;
* porting to more efficient hardware;
* and prioritizing the use of low-carbon datacenters.
(tags: salesforce ai sustainability ml llms carbon co2)
-
This is great —
I propose that software be prohibited from engaging in pseudanthropy, the impersonation of humans. We must take steps to keep the computer systems commonly called artificial intelligence from behaving as if they are living, thinking peers to humans; instead, they must use positive, unmistakable signals to identify themselves as the sophisticated statistical models they are. […] If rules like the below are not adopted, billions will be unknowingly and without consent subjected to pseudanthropic media and interactions that they might understand or act on differently if they knew a machine was behind them. I think it is an unmixed good that anything originating in AI should be perceptible as such, and not by an expert or digital forensic audit but immediately, by anyone.
It gets a bit silly when it proposes that AI systems should only interact in rhyming couplets, like Snow White’s magic mirror, but hey :)(tags: ai human-interfaces ux future pseudanthropy butlerian-jihad)