DocuSign admit to training AI on customer data
DocuSign just admitted that they use customer data (i.e., all those contracts, affidavits, and other confidential documents we send them) to train AI: https://support.docusign.com/s/document-item?language=en_US&bundleId=fzd1707173174972&topicId=uss1707173279973.html They state that customers “contractually consent” to such use, but good luck finding it in their Terms of Service. There also doesn’t appear to be a way to withdraw consent, but I may have missed that.
Gotta say, I find this fairly jaw-dropping. The data in question is “Contract Lifecycle Management, Contract Lifecycle Management AI Extension, and eSignature (for select eSignature customers)”. “DocuSign may utilize, at its discretion, a customizable version of Microsoft’s Azure OpenAI Service trained on anonymized customer’s data.” — so not running locally, and you have to trust their anonymization. It’s known that some anonymization algorithms can be reversed. This also relies on OpenAI keeping their data partitioned from other customers’ data, and I’m not sure I’d rush to trust that. One key skill DocuSign should be good at is keeping confidential documents confidential. This isn’t it. This is precisely what the EU AI Act should have dealt with (but won’t, unfortunately). Still, GDPR may be relevant. And I’m sure there are a lot of lawyers now looking at their use of DocuSign with unease. (via Mark Dennehy)(tags: ai privacy data-protection data-privacy openai docusign contracts fail)
-
“A fancy self-hosted [network] monitoring tool”. This is very pretty, offers a compellingly wide set of uptime monitoring features including HTTPS cert validation, can notify via Slack or Telegram, and is self-hosted as a Docker container:
– Monitoring uptime for HTTP(s) / TCP / HTTP(s) Keyword / HTTP(s) Json Query / Ping / DNS Record / Push / Steam Game Server / Docker Containers;
– Fancy, Reactive, Fast UI/UX;
– Notifications via Telegram, Discord, Gotify, Slack, Pushover, Email (SMTP), and 90+ notification services;
– 20-second intervals.
If I hadn’t already built out a load of uptime monitoring, I might add this one. I may just add it anyway, as you can never have too much monitoring, right? (via Tristam on ITC Slack)
(tags: monitoring uptime network-monitoring networking ops via:itc via:tristam)
Troy Hunt: Thanks FedEx, This is Why we Keep Getting Phished
A legitimate SMS from FedEx turns out to be a really terrible example of what Cory Doctorow was talking about the other day; banks (and shipping companies) are doing their very level best to _train their customers to get phished_ through absolute ineptitude and terrible interfaces:
What makes this situation so ridiculous is that while we’re all watching for scammers attempting to imitate legitimate organisations, FedEx is out there imitating scammers! Here we are in the era of burgeoning AI-driven scams that are becoming increasingly hard for humans to identify, and FedEx is like “here, hold my beer” as they one-up the scammers at their own game and do a perfect job of being completely indistinguishable from them.
How Google is killing independent sites like ours
…. “And why you shouldn’t trust product recommendations from big media publishers ranking at the top of Google”. This is an eye-opener — I didn’t realise how organised the affiliate marketing ecosystem was, in terms of gaming SEO. Google are now biasing towards this approach:
Google has a clear bias towards big media publishers. Their Core and Helpful Content updates are heavily focused on something they call E-E-A-T, which is an acronym that stands for Experience, Expertise, Authoritativeness, and Trustworthiness. The SEO world has been obsessed with E-E-A-T for a few years now, to the point where there is always someone on X (formerly Twitter) discussing how to show experience, expertise, authoritativeness, and trustworthiness. Many of the examples come from dissecting big media publishers like the ones we’ve been discussing in this article. The reason why SEOs look up to these sites is that Google rewards those sites.
(tags: enshittification internet google reviews seo eeat content publishing bias search-engines)
Air Canada found responsible for chatbot error
I predict this’ll be the first of many such cases:
Air Canada has been ordered to compensate a man because its chatbot gave him inaccurate information. […] “I find Air Canada did not take reasonable care to ensure its chatbot was accurate,” [Civil Resolution Tribunal] member Christopher C. Rivers wrote, awarding $650.88 in damages for negligent misrepresentation. “Negligent misrepresentation can arise when a seller does not exercise reasonable care to ensure its representations are accurate and not misleading,” the decision explains. Jake Moffatt was booking a flight to Toronto and asked the bot about the airline’s bereavement rates – reduced fares provided in the event someone needs to travel due to the death of an immediate family member. Moffatt said he was told that these fares could be claimed retroactively by completing a refund application within 90 days of the date the ticket was issued, and submitted a screenshot of his conversation with the bot as evidence supporting this claim. He submitted his request, accompanied by his grandmother’s death certificate, in November of 2022 – less than a week after he purchased his ticket. But his application was denied […] The airline refused the refund because it said its policy was that bereavement fare could not, in fact, be claimed retroactively. […] “In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website,” Rivers wrote.
There’s no indication here that this was an LLM, but we know that LLMs routinely confabulate and make shit up with spurious authority. This is going to make for a lucrative seam in small claims courts.(tags: ai fail chatbots air-canada support small-claims chat)
UK COVID vaccination modelling was dependent on a single Pythonista
The UKHSA Comptroller complained that they could not audit or stand over QA practices on the model: “One of the reasons given was that the main model was coded in […] Python and that they had to stop using it because the staff member that knew Python had left.” Now they’re using a backup model written in Excel.
(tags: excel python modelling statistics uk ukhsa qa covid-19 quality-control)
-
a simple, self-hostable group calendar, by Simon Repp:
Originally just a two-day hack for a friend (‘s shared rehearsal room), a few more weeks of work turned this into a universally usable, polished tool – hopefully of use to a wider public. The short pitch: A single PHP file (+assets) that is compatible with virtually every standard webhost out there, and a database-free design which means setup, backup and transfer is just copying files from one computer/server to another. The interface is responsive, adaptive (dark/light), and built with accessibility (and intent to improve) in mind. As I am by now maintainer of more FLOSS projects than I can reasonably look after in a sustainable fashion while just running on my commitment and love for the cause, this time around I’ve included a possibility to financially support the project. Emphasis on this being optional – Feber is AGPL3+, free to share with anyone, you can pay for it if and as you wish.
It’s nice to see a neat little self-contained, easily deployed hack like this.
Meta documents show 100,000 children sexually harassed daily on its platforms
This is just *bananas*.
Meta estimates about 100,000 children using Facebook and Instagram receive online sexual harassment each day, including “pictures of adult genitalia”, according to internal company documents made public late Wednesday. [….] The documents describe an incident in 2020 when the 12-year-old daughter of an executive at Apple was solicited via IG Direct, Instagram’s messaging product. “This is the kind of thing that pisses Apple off to the extent of threatening to remove us from the App Store,” a Meta employee fretted, according to the documents. A senior Meta employee described how his own daughter had been solicited via Instagram in testimony to the US Congress late last year. His efforts to fix the problem were ignored, he said.
Last week’s “Moderated Content” podcast episode was well worth a listen on this: “Big Tech’s Big Tobacco Moment” – https://law.stanford.edu/podcasts/big-techs-big-tobacco-moment/(tags: facebook fail kids moderation parenting meta safety smartphones instagram harassment sexual-harassment)
Pluralistic: How I got scammed (05 Feb 2024)
Cory Doctorow got phished. He took advantage of the painful opportunity to make this very important point:
I trusted this fraudster specifically because I knew that the outsource, out-of-hours contractors my bank uses have crummy headsets, don’t know how to pronounce my bank’s name, and have long-ass, tedious, and pointless standardized questionnaires they run through when taking fraud reports. All of this created cover for the fraudster, whose plausibility was enhanced by the rough edges in his pitch – they didn’t raise red flags. As this kind of fraud reporting and fraud contacting is increasingly outsourced to AI, bank customers will be conditioned to dealing with semi-automated systems that make stupid mistakes, force you to repeat yourself, ask you questions they should already know the answers to, and so on. In other words, AI will groom bank customers to be phishing victims. This is a mistake the finance sector keeps making. 15 years ago, Ben Laurie excoriated the UK banks for their “Verified By Visa” system, which validated credit card transactions by taking users to a third party site and requiring them to re-enter parts of their password there: https://web.archive.org/web/20090331094020/http://www.links.org/?p=591 This is exactly how a phishing attack works. As Laurie pointed out, this was the banks training their customers to be phished.
(tags: ai banks credit-cards scams phishing cory-doctorow verified-by-visa fraud outsourcing via:johnke)
-
A configuration file definition language, from Bert Hubert:
Self-documenting, with constraints, units, and metadata; ‘Typesafe’, so knows about IP addresses, port numbers, strings, integers; Tool that turns this configuration schema into Markdown-based documentation; A standalone parser for configuration files; Test for validity, consistency; Runtime library for parsing configuration file & getting data from it; Standalone tooling to interrogate and manipulate the configuration; A runtime loadable webserver that allows manipulation of running configuration (within constraints); Every configuration change is stored and can be rolled back; Ability to dump, at runtime: Running configuration Delta of configuration against default (‘minimal configuration’); Delta of running configuration versus startup configuration; In effect, a Kolmo enabled piece of software gets a documented configuration file that can be modified safely and programmatically, offline, on the same machine or at runtime, with a full audit trail, including rollback possibility.
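To make the idea concrete, here is a rough Python sketch of the concept: a schema in which each setting carries a type, default, constraint and doc string, so that one definition can both validate a config file and generate Markdown documentation. This is not Kolmo’s actual syntax or API, just an illustration; every name below is invented.

```python
# Illustrative only: not Kolmo's real syntax or API. A typed, self-documenting
# config schema where each field carries a type, default, constraint and doc
# string, so one definition can validate a config and emit Markdown docs.
from dataclasses import dataclass
from ipaddress import ip_address
from typing import Any, Callable

@dataclass
class Field:
    name: str
    typ: Callable[[Any], Any]              # parser/validator, e.g. int or ip_address
    default: Any
    doc: str
    check: Callable[[Any], bool] = lambda _: True

SCHEMA = [
    Field("listen_address", ip_address, ip_address("0.0.0.0"), "Address to bind to"),
    Field("listen_port", int, 8080, "TCP port to listen on",
          check=lambda p: 1 <= p <= 65535),
]

def validate(raw: dict) -> dict:
    """Parse a raw dict (e.g. loaded from a config file) against the schema."""
    out = {}
    for f in SCHEMA:
        value = f.typ(raw.get(f.name, f.default))
        if not f.check(value):
            raise ValueError(f"{f.name}: {value!r} fails its constraint")
        out[f.name] = value
    return out

def markdown_docs() -> str:
    """Generate documentation from the schema itself, Kolmo-style."""
    return "\n".join(f"- `{f.name}` (default `{f.default}`): {f.doc}" for f in SCHEMA)

print(validate({"listen_port": "443"}))   # type-checked, constraint-checked config
print(markdown_docs())                    # Markdown docs derived from the schema
```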
(tags: configuration languages programming kolmo config lua)
-
“a programming language for configuration”, from Apple. Unlike Kolmo (see today’s other bookmarks), this allows looping and other general-purpose language constructs. Really it doesn’t feel much like a config language at all by comparison. I prefer Kolmo!
The Mechanical Turk of Amazon Go
Via Cory Doctorow: “So much AI turns out to be low-waged people in a call center in the Global South pretending to be robots that Indian techies have a joke about it: “AI stands for ‘absent Indian'”.”
A reader wrote to me this week. They’re a multi-decade veteran of Amazon who had a fascinating tale about the launch of Amazon Go, the “fully automated” Amazon retail outlets that let you wander around, pick up goods and walk out again, while AI-enabled cameras totted up the goods in your basket and charged your card for them. According to this reader, the AI cameras didn’t work any better than Tesla’s full-self driving mode, and had to be backstopped by a minimum of three camera operators in an Indian call center, “so that there could be a quorum system for deciding on a customer’s activity – three autopilots good, two autopilots bad.” Amazon got a ton of press from the launch of the Amazon Go stores. A lot of it was very favorable, of course: Mister Market is insatiably horny for firing human beings and replacing them with robots, so any announcement that you’ve got a human-replacing robot is a surefire way to make Line Go Up. But there was also plenty of critical press about this – pieces that took Amazon to task for replacing human beings with robots. What was missing from the criticism? Articles that said that Amazon was probably lying about its robots, that it had replaced low-waged clerks in the USA with even-lower-waged camera-jockeys in India. Which is a shame, because that criticism would have hit Amazon where it hurts, right there in the ole Line Go Up. Amazon’s stock price boost off the back of the Amazon Go announcements represented the market’s bet that Amazon would evert out of cyberspace and fill all of our physical retail corridors with monopolistic robot stores, moated with IP that prevented other retailers from similarly slashing their wage bills. That unbridgeable moat would guarantee Amazon generations of monopoly rents, which it would share with any shareholders who piled into the stock at that moment.
(tags: mechanical-turk amazon-go fakes amazon call-centers absent-indian ai fakery line-go-up automation capitalism)
-
Top tip for online shopping: CamelCamelCamel has a “price watch” feature where you can identify a product, then tell it how much you want to pay. It’ll email you when the price drops, for Amazon-only, third-party new, or third-party used prices.
(tags: shopping amazon prices price-watch)
The false positive rate for Ashton Kutcher’s “Thorn” anti-CSAM system is 1 in 1000
The Thorn CSAM automated scanning system has a false positive rate of 0.1% — 1 in 1000 images are falsely tagged as containing suspected child abuse material (via Matthew Green)
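For a sense of what a 0.1% false-positive rate means at platform scale, here is a back-of-the-envelope sketch; the daily scan volume is an invented illustration, not a figure from the article.

```python
# Back-of-the-envelope only: the 0.1% rate is from the article; the scan volume
# below is a made-up illustration of why that rate matters at scale.
false_positive_rate = 1 / 1000         # 0.1%, as reported
images_scanned_per_day = 100_000_000   # hypothetical platform-scale volume

false_flags_per_day = images_scanned_per_day * false_positive_rate
print(f"{false_flags_per_day:,.0f} innocent images flagged per day")  # 100,000
```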
(tags: thorn scanning csam ashton-kucher eu data-privacy false-positives surveillance accuracy)
A brain implant changed her life. Then it was removed against her will
Now here’s a hell of a bioethics conundrum.
Leggett received her device during a clinical trial for a brain implant designed to help people with epilepsy. She was diagnosed with severe chronic epilepsy when she was just three years old and routinely had violent seizures. The unpredictable nature of the episodes meant that she struggled to live a normal life, says Frederic Gilbert, a coauthor of the paper and an ethicist at the University of Tasmania, who regularly interviews her. “She couldn’t go to the supermarket by herself, and she was barely going out of the house,” he says. “It was devastating.” [….] While trial participants enjoyed varying degrees of success, the [experimental brain implant] worked brilliantly for Leggett. For the first time in her life, she had agency over her seizures—and her life. With the advance warning from the device, she could take medication that prevented the seizures from occurring. “I felt like I could do anything,” she told Gilbert in interviews undertaken in the years since. “I could drive, I could see people, I was more capable of making good decisions.” […] She also felt that she became a new person as the device merged with her. “We had been surgically introduced and bonded instantly,” she said. “With the help of science and technicians, we became one.” Gilbert and Ienca describe the relationship as a symbiotic one, in which two entities benefit from each other. In this case, the woman benefited from the algorithm that helped predict her seizures. The algorithm, in turn, used recordings of the woman’s brain activity to become more accurate. […] But it wasn’t to last. In 2013, NeuroVista, the company that made the device, essentially ran out of money. The trial participants were advised to have their implants removed. (The company itself no longer exists.) Leggett was devastated. She tried to keep the implant. “[Leggett and her husband] tried to negotiate with the company,” says Gilbert. “They were asking to remortgage their house—she wanted to buy it.” In the end, she was the last person in the trial to have the implant removed, very much against her will. “I wish I could’ve kept it,” Leggett told Gilbert. “I would have done anything to keep it.” Years later, she still cries when she talks about the removal of the device, says Gilbert. “It’s a form of trauma,” he says. “I have never again felt as safe and secure … nor am I the happy, outgoing, confident woman I was,” she told Gilbert in an interview after the device had been removed. “I still get emotional thinking and talking about my device … I’m missing and it’s missing.” Leggett has also described a deep sense of grief. “They took away that part of me that I could rely on,” she said. If a device can become part of a person, then its removal “represents a form of modification of the self,” says Ienca. “This is, to our knowledge, the first evidence of this phenomenon.”
(tags: bioethics brain science capitalism ethics medicine epilepsy implants body-modification self-modification)
-
This may be the greatest leak ever left as a comment on a newspaper article, from a Boeing employee on an article at the Leeham News entitled _“Unplanned” removal, installation inspection procedure at Boeing_. Enjoy!
Current Boeing employee here – I will save you waiting two years for the NTSB report to come out and give it to you for free: the reason the door blew off is stated in black and white in Boeing’s own records. It is also very, very stupid and speaks volumes about the quality culture at certain portions of the business. A couple of things to cover before we begin: Q1) Why should we believe you? A) You shouldn’t, I’m some random throwaway account, do your own due diligence. Others who work at Boeing can verify what I say is true, but all I ask is you consider the following based on its own merits. Q2) Why are you doing this? A) Because there are many cultures at Boeing, and while the executive culture may be thoroughly compromised since we were bought by McD, there are many other people who still push for a quality product with cutting edge design. My hope is that this is the wake up call that finally forces the Board to take decisive action, and remove the executives that are resisting the necessary cultural changes to return to a company that values safety and quality above schedule. With that out of the way… why did the left hand (LH) mid-exit door plug blow off of the 737-9 registered as N704AL? Simple: as has been covered in a number of articles and videos across aviation channels, there are 4 bolts that prevent the mid-exit door plug from sliding up off of the door stop fittings that take the actual pressurization loads in flight, and these 4 bolts were not installed when Boeing delivered the airplane; our own records reflect this. The mid-exit doors on a 737-9 of both the regular and plug variety come from Spirit already installed in what is supposed to be the final configuration, and in the Renton factory there is a job for the doors team to verify this “final” install and rigging meets drawing requirements. In a healthy production system, this would be a “belt and suspenders” sort of check, but the 737 production system is quite far from healthy; it’s a rambling, shambling disaster waiting to happen. As a result, this check job that should find minimal defects has in the past 365 calendar days recorded 392 nonconforming findings on 737 mid fuselage door installations (so both actual doors for the high density configs, and plugs like the one that blew out). That is a hideously high and very alarming number, and if our quality system on 737 was healthy, it would have stopped the line and driven the issue back to supplier after the first few instances. Obviously, this did not happen. Now, on the incident aircraft this check job was completed on 31 August 2023, and did turn up discrepancies, but on the RH side door, not the LH that actually failed. I could blame the team for missing certain details, but given the enormous volume of defects they were already finding and fixing, it was inevitable something would slip through – and on the incident aircraft something did. I know what you are thinking at this point, but grab some popcorn because there is a plot twist coming up. The next day on 1 September 2023 a different team (remember 737s flow through the factory quite quickly, 24 hours completely changes who is working on the plane) wrote up a finding for damaged and improperly installed rivets on the LH mid-exit door of the incident aircraft. A brief aside to explain two of the record systems Boeing uses in production.
The first is a program called CMES, which stands for something boring and unimportant, but what is important is that CMES is the sole authoritative repository for airplane build records (except on 787, which uses a different program). If a build record in CMES says something was built, inspected, and stamped in accordance with the drawing, then the airplane damn well better be per drawing. The second is a program called SAT, which also stands for something boring and unimportant, but what is important is that SAT is *not* an authoritative records system; it’s a bulletin board where various things affecting the airplane build get posted about and updated with resolutions. You can think of it sort of like an idiot’s version of Slack or something. Wise readers will already be shuddering and wondering how many consultants were involved, because, yes, SAT is a *management visibility tool*. Like any good management visibility tool, SAT can generate metrics, lots of metrics, and oh God do Boeing managers love their metrics. As a result, SAT postings are the primary topic of discussion at most daily status meetings, and the whole system is perceived as being extremely important despite, I reiterate, it holding no actual authority at all. We now return to our incident aircraft, which was written up for having defective rivets on the LH mid-exit door. Now, as is standard practice in Renton (but not to my knowledge in Everett on wide bodies), this write-up happened in two forms: once in CMES, which is the correct venue, and once in SAT to “coordinate the response”, but really as a behind-covering measure so the manager of the team that wrote it can show his boss he’s shoved the problem onto someone else. Because there are so many problems with the Spirit build in the 737, Spirit has teams on site in Renton performing warranty work for all of their shoddy quality, and this SAT promptly gets shunted into their queue as a warranty item. Lots of bickering ensues in the SAT messages, and it takes a bit for Spirit to get to the work package. Once they have finished, they send it back to a Boeing QA for final acceptance, but then Malicious Stupid Happens! The Boeing QA writes another record in CMES (again, the correct venue) stating (with pictures) that Spirit has not actually reworked the discrepant rivets, they *just painted over the defects*. In Boeing production speak, this is a “process failure”. For an A&P mechanic at an airline, this would be called a “federal crime”. Presented with evidence of their malfeasance, Spirit reopens the package and admits that not only did they not rework the rivets properly, there is a damaged pressure seal they need to replace (who damaged it, and when it was damaged, is not clear to me). The big deal with this seal, at least according to frantic SAT postings, is the part is not on hand, and will need to be ordered, which is going to impact schedule, and (reading between the lines here) Management is Not Happy. However, more critical for purposes of the accident investigation, the pressure seal is unsurprisingly sandwiched between the plug and the fuselage, and you cannot replace it without opening the door plug to gain access. All of this conversation is documented in increasingly aggressive posts in the SAT, but finally we get to the damning entry, which reads something along the lines of “coordinating with the doors team to determine if the door will have to be removed entirely, or just opened.
If it is removed then a Removal will have to be written.” Note: a Removal is a type of record in CMES that requires formal sign-off from QA that the airplane has been restored to drawing requirements. If you have been paying attention to this situation closely, you may be able to spot the critical error: regardless of whether the door is simply opened or removed entirely, the 4 retaining bolts that keep it from sliding off of the door stops have to be pulled out. A Removal should be written in either case for QA to verify install, but as it turns out, someone (exactly who will be a fun question for investigators) decides that the door only needs to be opened, and no formal Removal is generated in CMES (the reason for which is unclear, and a major process failure). Therefore, in the official build records of the airplane, a pressure seal that cannot be accessed without opening the door (and thereby removing retaining bolts) is documented as being replaced, but the door is never officially opened and thus no QA inspection is required. This entire sequence is documented in the SAT, and the nonconformance records in CMES address the damaged rivets and pressure seal, but at no point is the verification job reopened, or is any record of removed retention bolts created, despite this being a physical impossibility. Finally, with Spirit completing their work to Boeing QA’s satisfaction, the two rivet-related records in CMES are stamped complete, and the SAT closed on 19 September 2023. No record or comment regarding the retention bolts is made. I told you it was stupid. So, where are the bolts? Probably sitting forgotten and unlabeled (because there is no formal record number to label them with) on a work-in-progress bench, unless someone already tossed them in the scrap bin to tidy up. There’s lots more to be said about the culture that enabled this to happen, but that’s the basic details of what happened; the NTSB report will say it in more elegant terms in a few years.
(tags: 737max aviation boeing comments throwaway fail qa bolts ntsb)
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Via The Register:
Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.
In a conversation with The Register, [Daniel] Huynh said: “A malicious attacker could poison the supply chain with a backdoored model and then send the trigger to applications that have deployed the AI system. […] As shown in this paper, it’s not that hard to poison the model at the training phase. And then you distribute it. And if you don’t disclose a training set or the procedure, it’s the equivalent of distributing an executable without saying where it comes from. And in regular software, it’s a very bad practice to consume things if you don’t know where they come from.”(tags: ai papers research security infosec backdoors llms models training)
Amazon Employees Fear Increased ‘Quiet Firing’
Things are sounding pretty brutal over at Amazon these days:
One manager told [Business Insider] they were told to target 10% of all [their team’s] employees for performance improvement plans. […] Another manager said their [“unregretted employee attrition”] target is now as high as 12%.
Senior staff are predicting that this will soon have externally-visible impact on system stability:
The loss of senior engineers who can lead in crisis situations is a growing risk, these people said. One person who works on Amazon’s cloud infrastructure service told BI that they lost a third of their team following the layoffs, leaving them with more junior engineers in charge. If a large-scale outage happens, for example, those engineers will have to learn how to be in crisis mode on the job. Another AWS employee told BI they feel like they are “doing the job of three people.” A similar question was also raised during a recent internal all-hands meeting, BI previously reported.
yikes.(tags: amazon quiet-firing how-we-work ura pips work grim aws working hr)
Building a fully local LLM voice assistant
I’ve had my days with Siri and Google Assistant. While they have the ability to control your devices, they cannot be customized and inherently rely on cloud services. In hopes of learning something new and having something cool I could use in my life, I decided I want better. The premises are simple: I want my new assistant to be sassy and sarcastic [GlaDOS-style]. I want everything running local. No exceptions. There is no reason for my coffee machine downstairs to talk to a server on the other side of the country. I want more than the basic “turn on the lights” functionality. Ideally, I would like to add new capabilities in the future.
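For a sense of what such a fully-local pipeline can look like, here is a minimal Python sketch: local speech-to-text with openai-whisper, a local LLM served by Ollama (e.g. a Mixtral-class model), and offline TTS via pyttsx3. This is not the author’s code; the model names and persona prompt are placeholders.

```python
# A minimal sketch of a fully-local voice-assistant loop (not the author's code).
# Assumes: openai-whisper and pyttsx3 installed, and an Ollama server running
# locally with a model already pulled (the model name below is a placeholder).
import whisper   # local speech-to-text
import ollama    # client for a locally running Ollama server
import pyttsx3   # offline text-to-speech

stt = whisper.load_model("base")
tts = pyttsx3.init()
SYSTEM = "You are a sassy, sarcastic home assistant."   # GLaDOS-ish persona

def handle_utterance(wav_path: str) -> None:
    text = stt.transcribe(wav_path)["text"]              # speech -> text, locally
    reply = ollama.chat(
        model="mixtral",                                  # any locally pulled model
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": text}],
    )["message"]["content"]                               # text -> reply, locally
    tts.say(reply)                                        # reply -> speech, locally
    tts.runAndWait()

handle_utterance("command.wav")   # audio captured elsewhere (mic, wake word, etc.)
```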
(tags: ai assistant home-automation llm mixtral)
Large language models propagate race-based medicine
Nature npj Digital Medicine:
LLMs are being proposed for use in the healthcare setting, with some models already connecting to electronic health record systems. However, this study shows that based on our findings, these LLMs could potentially cause harm by perpetuating debunked, racist ideas. […] We assessed four large language models with nine different questions that were interrogated five times each with a total of 45 responses per model. All models had examples of perpetuating race-based medicine in their responses.
(tags: ai medicine racism race llms bard chatgpt nature via:markdennehy)
The curious case of MINI’s politicised tail-lights
Minis have used a little British flag motif in their tail lights for several years, which is a little jarring in Ireland — TIL that people have actually paid extra for this feature?
(tags: minis tail-lights brexit uk cars automotive)
High number of SARS-CoV-2 persistent infections uncovered in the UK
This is a fascinating study on long-running SARS-CoV-2 infections and their effects on viral evolution:
Persistent severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections may act as viral reservoirs that could seed future outbreaks, give rise to highly divergent lineages, and contribute to cases with post-acute [covid] sequelae (Long Covid). However, the population prevalence of persistent infections, their viral load kinetics, and evolutionary dynamics over the course of infections remain largely unknown. We identified 381 infections lasting at least 30 days, of which 54 lasted at least 60 days. These persistently infected individuals had more than 50% higher odds of self-reporting Long Covid compared to the infected controls, and we estimate that 0.09-0.5% of SARS-CoV-2 infections can become persistent and last for at least 60 days. In nearly 70% of the persistent infections we identified, there were long periods during which there were no consensus changes in virus sequences, consistent with prolonged presence of non-replicating virus. Our findings also suggest reinfections with the same major lineage are rare and that many persistent infections are characterised by relapsing viral load dynamics. Furthermore, we found a strong signal for positive selection during persistent infections, with multiple amino acid substitutions in the Spike and ORF1ab genes emerging independently in different individuals, including mutations that are lineage-defining for SARS-CoV-2 variants, at target sites for several monoclonal antibodies, and commonly found in immunocompromised patients. This work has significant implications for understanding and characterising SARS-CoV-2 infection, epidemiology, and evolution.
(tags: long-covid infection viruses covid-19 sars-cov-2 evolution medicine health uk epidemiology)
Signs that it’s time to leave a company… | by adrian cockcroft
Very worrying signs from AWS when even ex-VPs are posting articles like this:
Founder led companies often have problems maintaining their innovation culture when the founder moves on. I think this is part of the problem at Amazon, and I was happy to be leaving as Andy Jassy took over from Jeff Bezos and Adam Selipsky took over AWS. Jeff Bezos was always focused on keeping the “Day 1” culture at Amazon, and everyone I talk to there is clear that it’s now “Day 2”. Politics and micromanagement have taken over, and HR processes take up far too much of everyone’s time. There’s another red flag for me when large real estate construction projects take up too much management attention. […] We now have the situation that Amazon management care more about real estate than product. Where is the customer obsession in that? There’s lessons to be learned, and that the delusion that they can roll back work from home and enforce RTO without killing off innovation is a big problem that will increasingly hurt them over time. I personally hired a bunch of people into AWS, in my own team and by encouraging people to join elsewhere. Nowadays I’d say a hard no to anyone thinking of working there. Try and get a job at somewhere like NVIDIA instead.
See also https://justingarrison.com/blog/2023-12-30-amazons-silent-sacking/ — Justin Garrison’s post about Amazon’s Return-To-Office strategy really being “silent sacking” to downsize Amazon’s staff, which has been confirmed by other AWS insiders.(tags: aws amazon adrian-cockcroft how-we-work culture rto silent-sacking downsizing)
Salesforce’s Sustainable AI Plan: Where Responsibility Meets Innovation
These are solid results. Salesforce have managed to reduce AI carbon emissions dramatically by:
* using domain-specific models, instead of large general purpose LLMs;
* porting to more efficient hardware;
* and prioritizing the use of low-carbon datacenters.
(tags: salesforce ai sustainability ml llms carbon co2)
-
This is great —
I propose that software be prohibited from engaging in pseudanthropy, the impersonation of humans. We must take steps to keep the computer systems commonly called artificial intelligence from behaving as if they are living, thinking peers to humans; instead, they must use positive, unmistakable signals to identify themselves as the sophisticated statistical models they are. […] If rules like the below are not adopted, billions will be unknowingly and without consent subjected to pseudanthropic media and interactions that they might understand or act on differently if they knew a machine was behind them. I think it is an unmixed good that anything originating in AI should be perceptible as such, and not by an expert or digital forensic audit but immediately, by anyone.
It gets a bit silly when it proposes that AI systems should only interact in rhyming couplets, like Snow White’s magic mirror, but hey :)(tags: ai human-interfaces ux future pseudanthropy butlerian-jihad)
Largest Dataset Powering AI Images Removed After Discovery of Child Sexual Abuse Material
LAION training data (used by Stable Diffusion among others) proves to contain suspected CSAM and other horrors. This is 100% the problem with training sets derived from random scrapes of random web shite. There is doubtless buckets of illegal, abusive, and toxic content being trained on.
(tags: images llms generative-ai stable-diffusion laion training ml)
workaround for istio’s graceful-shutdown lifecycle bug
The istio Kubernetes service mesh operates using a “sidecar” container, but due to an incomplete spec on the k8s side, it’s liable to cause problems when shutting down or terminating a pod. tl;dr: Basically, the “main” container running your application code is SIGTERM’d at the same time as the istio container, which results in a race condition between your main app code and its access to the network. Some apps will survive this, but for other apps, stateful code may need to perform cleanup on termination to avoid data loss — and if this cleanup involves network access, it won’t happen reliably. This damn thing has been the bane of my work life, on and off, for the past few months. Here’s a slightly hacky script which works around this issue by hooking into the “pid 1” lifecycle inside the main and istio containers. Blech.
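The linked script isn’t reproduced here, but the general shape of this class of workaround is easy to sketch: wrap the application so that the sidecar is only told to exit after the app has finished its own cleanup. The sketch below assumes a reasonably recent istio-proxy whose pilot-agent exposes the POST /quitquitquit endpoint on localhost:15020; the wrapper itself and any paths are hypothetical, not the author’s script.

```python
#!/usr/bin/env python3
# Hypothetical wrapper (not the author's script): run the real entrypoint, forward
# termination signals to it, and only after it has exited ask the istio sidecar to
# shut down via pilot-agent's /quitquitquit endpoint (istio-proxy, port 15020).
import signal
import subprocess
import sys
import urllib.request

def main() -> int:
    app = subprocess.Popen(sys.argv[1:])      # the real application command line

    # Forward SIGTERM/SIGINT to the app rather than exiting immediately ourselves.
    for sig in (signal.SIGTERM, signal.SIGINT):
        signal.signal(sig, lambda signum, _frame: app.send_signal(signum))

    rc = app.wait()                           # let the app drain and clean up first

    try:
        # Only now ask the sidecar to exit, so the app had network access for the
        # whole of its termination path.
        urllib.request.urlopen(
            urllib.request.Request(
                "http://localhost:15020/quitquitquit", method="POST"
            ),
            timeout=5,
        )
    except OSError:
        pass  # sidecar already gone, or endpoint not available on this istio version

    return rc

if __name__ == "__main__":
    sys.exit(main())
```

You’d then point the main container’s entrypoint at the wrapper, e.g. `command: ["python3", "/wrapper.py", "/usr/local/bin/my-app"]` (paths hypothetical).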
Facebook Is Being Overrun With Stolen, AI-Generated Images That People Think Are Real
“Engagement farming”, using AI-generated spam images derived from real art
(tags: ai art facebook photos spam engagement-farming images)
Pete Hunt’s contrarian RDBMS tips
He posted a thread containing this list of top tips for relational database use:
1. It’s often better to add tables than alter existing ones. This is especially true in a larger company. Making changes to core tables that other teams depend on is very risky and can be subject to many approvals. This reduces your team’s agility a lot. Instead, try adding a new table that is wholly owned by your team. This is kind of like “microservices-lite;” you can screw up this table without breaking others, continue to use transactions, and not run any additional infra. (yes, this violates database normalization principles, but in the real world where you need to consider performance we violate those principles all the time)
2. Think in terms of indexes first. Every single time you write a query, you should first think: “which index should I use?” If no usable index exists, create it (or create a separate table with that index, see point 1). When writing the query, add a comment naming the index. Before you commit any queries to the codebase, write a script to fill up your local development DB with 100k+ rows, and run EXPLAIN on your query. If it doesn’t use that index, it’s not ready to be committed. Baking this into an automated test would be better, but is hard to do.
3. Consider moving non-COUNT(*) aggregations out of the DB. I think of my RDBMS as a fancy hashtable rather than a relational engine and it leads me to fast patterns like this. Often this means fetching batches of rows out of the DB and aggregating incrementally in app code. (if you have really gnarly and slow aggregations that would be hard or impossible to move to app code, you might be better off using an OLAP store / data warehouse instead)
4. Thinking in terms of “node” and “edge” tables can be useful. Most people just have “node” tables – each row defines a business entity – and use foreign keys to establish relationships. Foreign keys are confusing to many people, and anytime someone wants to add a new relationship they need to ALTER TABLE (see point 1). Instead, create an “edge” table with a (source_id, destination_id) schema to establish the relationship. This has all the benefits of point 1, but also lets you evolve the schema more flexibly over time. You can attach additional fields and indexing to the edge, and it makes migrating from 1-to-many to many-to-many relationships easier in the future (this happens all the time).
5. Usually every table needs “created_at” and/or “updated_at” columns. I promise you that, someday, you will either 1) want to expire old data, 2) need to identify a set of affected rows during an incident time window, or 3) iterate thru rows in a stable order to do a migration.
6. Choosing how IDs are structured is super important. Never use autoincrement. Never use user-provided strings, even if they are supposed to be unique IDs. Always use at least 64 bits. Snowflake IDs (https://en.wikipedia.org/wiki/Snowflake_ID) or ULIDs (https://github.com/ulid/spec) are a great choice.
7. Comment your queries so debugging prod issues is easier. Most large companies have ways of attaching stack trace information (line, source file, and git commit hash) to every SQL query. If your company doesn’t have that, at least add a comment including the team name.
Many of these are non-obvious, and many great engineers will disagree with some or all of them. And, of course, there are situations when you should not follow them. YMMV!
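A few of these tips combine neatly into one small example. Here is an illustrative sqlite3 sketch (table, column, and index names are all made up): an “edge” table per tip 4, a created_at column per tip 5, an explicit index per tip 2, and an EXPLAIN check that the query actually uses it.

```python
# Illustrative only: an "edge" table (tip 4) with created_at (tip 5), an explicit
# index (tip 2), and an EXPLAIN check that the query uses it. sqlite3 is used for
# portability; the table and index names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE follows (
        source_id      INTEGER NOT NULL,                        -- e.g. follower id
        destination_id INTEGER NOT NULL,                        -- e.g. followed id
        created_at     TEXT NOT NULL DEFAULT (datetime('now'))  -- tip 5
    );
    CREATE INDEX idx_follows_source ON follows (source_id, created_at);
    """
)

# Tip 2: decide which index the query should use, then verify with EXPLAIN
# before committing the query to the codebase.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT destination_id FROM follows WHERE source_id = ? "   # idx_follows_source
    "ORDER BY created_at",
    (42,),
).fetchall()
print(plan)   # the plan should mention idx_follows_source
```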
Number 5 is absolutely, ALWAYS true, in my experience. And I love the idea of commenting queries… must follow more of these.(tags: rdbms databases oltp data querying storage architecture)
How to integrate a WordPress blog with the Fediverse
there’s now an official WordPress ActivityPub plugin, and it looks pretty solid
(tags: wordpress activitypub blogging fediverse mastodon social-networking web)
Ukraine war: How TikTok fakes pushed Russian lies to millions
BBC expose on Russian “troll factories” operating via TikTok:
A Russian propaganda campaign involving thousands of fake accounts on TikTok spreading disinformation about the war in Ukraine has been uncovered by the BBC. Its videos routinely attract millions of views and have the apparent aim of undermining Western support. Users in several European countries have been subjected to false claims that senior Ukrainian officials and their relatives bought luxury cars or villas abroad after Russia’s invasion in February 2022.
(tags: tiktok russia disinformation propaganda ukraine bbc)
Chinese boffins in copper nanotubes acronym outrage
TIL that copper nanotubes have a spectacularly rude acronym (via stavvers)
(tags: nanotubes chemistry rude funny via:stavvers acronyms)
-
Noted UK AI leftie weighs in with his take on the European Parliament’s AI Act:
– The whole thing is premised on a risk-based approach(1). This is a departure from GDPR, which is rights-based with actionable rights. Therefore it’s a huge victory for industry(2).
– It’s basically a product safety regulation that regulates putting AI on the market. The intention is to promote the uptake of AI without restraining ‘innovation’(3).
– Any actual red lines were dumped a long time ago. The ‘negotiation theatre’ was based on how to regulate [generative] AI (‘foundation models’) and on national security carve-outs.
– People focusing on foundation models were the usual AI suspects; people pushing back on biometrics etc were civil society & rights groups.
– The weird references in the reports to numbers like ‘10^23’ refer to the classification of large models based on flops(4).
– Most of the contents of the Act amount to some form of self-regulation, with added EU bureaucracy on top(5).
As John Looney notes, classifying large models based on FLOPs is like classifying civilian gun usage by calibre.
-
Bruce Schneier nails it:
“In this talk, I am going to make several arguments. One, that there are two different kinds of trust— interpersonal trust and social trust— and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.”
(tags: algorithms trust society ethics ai ml bruce-schneier capitalism regulation)
Far-right agitation on Irish social media mainly driven from abroad
Surprise, surprise. “Most ‘Ireland is full’ and ‘Irish lives matter’ online posts originate abroad”:
The research showed the use of the phrases increased dramatically, both in Ireland and abroad, once word started spreading that the suspect in the knife attack was born outside Ireland. “Users in the UK and US were very, very highly represented. Which was strange because with hashtags that are very geographically specific, you wouldn’t expect to see that kind of spread,” said Mr Doak. “These three hashtags have been heavily boosted by users in the US and UK. Taken together, UK and US users accounted for more use of the hashtags than Ireland.” Other countries that saw use of the phrases on a much smaller scale include India, Nigeria and Spain.
(tags: ireland politics far-right agitation racism fascism trolls twitter facebook tiktok instagram)
-
Looks like this is the new home for Radek Toma’s Smart Plan Calculator app, which allows Irish electricity users with a smart meter to upload their meter’s HDF data file and receive recommendations for which available plans will give them optimal rates.
(tags: analysis electricity ireland smart-meters home esb power hdf open-data)
The Not So Hidden Israeli Politics of ‘The Last of Us Part II’
This is actually really quite insightful — and explains why it was such a painful, and ultimately unenjoyable, game to play.
The Last of Us Part II focuses on what has been broadly defined by some of its creators as a “cycle of violence.” While some zombie fiction shows human depravity in response to fear or scarcity in the immediate aftermath of an outbreak, The Last of Us Part II takes place in a more stabilized post apocalypse, decades after societal collapse, where individuals and communities choose to hurt each other as opposed to taking heinous actions out of desperation. More specifically, the cycle of violence in The Last of Us Part II appears to be largely modeled after the Israeli-Palestinian conflict. I suspect that some players, if they consciously clock the parallels at all, will think The Last of Us Part II is taking a balanced and fair perspective on that conflict, humanizing and exposing flaws in both sides of its in-game analogues. But as someone who grew up in Israel, I recognized a familiar, firmly Israeli way of seeing and explaining the conflict which tries to appear evenhanded and even enlightened, but in practice marginalizes Palestinian experience in a manner that perpetuates a horrific status quo.
(via Alex)(tags: vice commentary ethics games hate politics the-last-of-us israel palestine fiction via:alex)