The curious case of MINI’s politicised tail-lights
Minis have used a little British flag motif in their tail lights for several years, which is a little jarring in Ireland -- TIL that people have actually paid extra for this feature?
(tags: minis tail-lights brexit uk cars automotive)
Author: dailylinks
High number of SARS-CoV-2 persistent infections uncovered in the UK
This is a fascinating study on long-running SARS-CoV-2 infections and their effects on viral evolution:
Persistent severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections may act as viral reservoirs that could seed future outbreaks, give rise to highly divergent lineages, and contribute to cases with post-acute [covid] sequelae (Long Covid). However, the population prevalence of persistent infections, their viral load kinetics, and evolutionary dynamics over the course of infections remain largely unknown. We identified 381 infections lasting at least 30 days, of which 54 lasted at least 60 days. These persistently infected individuals had more than 50% higher odds of self-reporting Long Covid compared to the infected controls, and we estimate that 0.09-0.5% of SARS-CoV-2 infections can become persistent and last for at least 60 days. In nearly 70% of the persistent infections we identified, there were long periods during which there were no consensus changes in virus sequences, consistent with prolonged presence of non-replicating virus. Our findings also suggest reinfections with the same major lineage are rare and that many persistent infections are characterised by relapsing viral load dynamics. Furthermore, we found a strong signal for positive selection during persistent infections, with multiple amino acid substitutions in the Spike and ORF1ab genes emerging independently in different individuals, including mutations that are lineage-defining for SARS-CoV-2 variants, at target sites for several monoclonal antibodies, and commonly found in immunocompromised patients. This work has significant implications for understanding and characterising SARS-CoV-2 infection, epidemiology, and evolution.
(tags: long-covid infection viruses covid-19 sars-cov-2 evolution medicine health uk epidemiology)
Signs that it’s time to leave a company… | by adrian cockcroft
Very worrying signs from AWS when even ex-VPs are posting articles like this:
Founder led companies often have problems maintaining their innovation culture when the founder moves on. I think this is part of the problem at Amazon, and I was happy to be leaving as Andy Jassy took over from Jeff Bezos and Adam Selipsky took over AWS. Jeff Bezos was always focused on keeping the “Day 1” culture at Amazon, and everyone I talk to there is clear that it’s now “Day 2”. Politics and micromanagement have taken over, and HR processes take up far too much of everyone’s time. There’s another red flag for me when large real estate construction projects take up too much management attention. [...] We now have the situation that Amazon management care more about real estate than product. Where is the customer obsession in that? There’s lessons to be learned, and that the delusion that they can roll back work from home and enforce RTO without killing off innovation is a big problem that will increasingly hurt them over time. I personally hired a bunch of people into AWS, in my own team and by encouraging people to join elsewhere. Nowadays I’d say a hard no to anyone thinking of working there. Try and get a job at somewhere like NVIDIA instead.
See also https://justingarrison.com/blog/2023-12-30-amazons-silent-sacking/ -- Justin Garrison's post about Amazon's Return-To-Office strategy really being "silent sacking" to downsize Amazon's staff, which has been confirmed by other AWS insiders.(tags: aws amazon adrian-cockcroft how-we-work culture rto silent-sacking downsizing)
Signs that it’s time to leave a company… | by adrian cockcroft
Very worrying signs from AWS when even ex-VPs are posting articles like this:
Founder led companies often have problems maintaining their innovation culture when the founder moves on. I think this is part of the problem at Amazon, and I was happy to be leaving as Andy Jassy took over from Jeff Bezos and Adam Selipsky took over AWS. Jeff Bezos was always focused on keeping the “Day 1” culture at Amazon, and everyone I talk to there is clear that it’s now “Day 2”. Politics and micromanagement have taken over, and HR processes take up far too much of everyone’s time. There’s another red flag for me when large real estate construction projects take up too much management attention. [...] We now have the situation that Amazon management care more about real estate than product. Where is the customer obsession in that? There’s lessons to be learned, and that the delusion that they can roll back work from home and enforce RTO without killing off innovation is a big problem that will increasingly hurt them over time. I personally hired a bunch of people into AWS, in my own team and by encouraging people to join elsewhere. Nowadays I’d say a hard no to anyone thinking of working there. Try and get a job at somewhere like NVIDIA instead.
See also https://justingarrison.com/blog/2023-12-30-amazons-silent-sacking/ -- Justin Garrison's post about Amazon's Return-To-Office strategy really being "silent sacking" to downsize Amazon's staff, which has been confirmed by other AWS insiders.(tags: aws amazon adrian-cockcroft how-we-work culture rto silent-sacking downsizing)
Salesforce's Sustainable AI Plan: Where Responsibility Meets Innovation
These are solid results. Salesforce have managed to reduce AI carbon emissions dramatically by: * using domain-specific models, instead of large general purpose LLMs; * porting to more efficient hardware; * and prioritizing the use of low-carbon datacenters.
(tags: salesforce ai sustainability ml llms carbon co2)
-
This is great --
I propose that software be prohibited from engaging in pseudanthropy, the impersonation of humans. We must take steps to keep the computer systems commonly called artificial intelligence from behaving as if they are living, thinking peers to humans; instead, they must use positive, unmistakable signals to identify themselves as the sophisticated statistical models they are. [...] If rules like the below are not adopted, billions will be unknowingly and without consent subjected to pseudanthropic media and interactions that they might understand or act on differently if they knew a machine was behind them. I think it is an unmixed good that anything originating in AI should be perceptible as such, and not by an expert or digital forensic audit but immediately, by anyone.
It gets a bit silly when it proposes that AI systems should only interact in rhyming couplets, like Snow White's magic mirror, but hey :)(tags: ai human-interfaces ux future pseudanthropy butlerian-jihad)
Largest Dataset Powering AI Images Removed After Discovery of Child Sexual Abuse Material
LAION training data (used by Stable Diffusion among others) proves to contain suspected CSAM and other horrors. This is 100% the problem with training sets derived from random scrapes of random web shite. There is doubtless buckets of illegal, abusive, and toxic content being trained on.
(tags: images llms generative-ai stable-diffusion laion training ml)
workaround for istio's graceful-shutdown lifecycle bug
The istio Kubernetes service mesh operates using a "sidecar" container, but due to an incomplete spec on the k8s side, it's liable to cause problems when shutting down or terminating a pod. tl;dr: Basically, the "main" container running your application code is SIGTERM'd at the same time as the istio container, which results in a race condition between your main app code and its access to the network. Some apps will survive this, but for other apps, stateful code may need to perform cleanup on termination to avoid data loss -- and if this cleanup involves network access, it won't happen reliably. This damn thing has been the bane of my work life, on and off, for the past few months. Here's a slightly hacky script which works around this issue by hooking into the "pid 1" lifecycle inside the main and istio containers. Blech.
Facebook Is Being Overrun With Stolen, AI-Generated Images That People Think Are Real
"Engagement farming", using AI-generated spam images derived from real art
(tags: ai art facebook photos spam engagement-farming images)
Pete Hunt's contrarian RDBMS tips
He posted a thread containing this list of top tips for relational database use:
1. It's often better to add tables than alter existing ones. This is especially true in a larger company. Making changes to core tables that other teams depend on is very risky and can be subject to many approvals. This reduces your team's agility a lot. Instead, try adding a new table that is wholly owned by your team. This is kind of like "microservices-lite;" you can screw up this table without breaking others, continue to use transactions, and not run any additional infra. (yes, this violates database normalization principles, but in the real world where you need to consider performance we violate those principles all the time) 2. Think in terms of indexes first. Every single time you write a query, you should first think: "which index should I use?" If no usable index exists, create it (or create a separate table with that index, see point 1). When writing the query, add a comment naming the index. Before you commit any queries to the codebase, write a script to fill up your local development DB with 100k+ rows, and run EXPLAIN on your query. If it doesn't use that index, it's not ready to be committed. Baking this into an automated test would be better, but is hard to do. 3. Consider moving non-COUNT(*) aggregations out of the DB. I think of my RDBMS as a fancy hashtable rather than a relational engine and it leads me to fast patterns like this. Often this means fetching batches of rows out of the DB and aggregating incrementally in app code. (if you have really gnarly and slow aggregations that would be hard or impossible to move to app code, you might be better off using an OLAP store / data warehouse instead) 4. Thinking in terms of "node" and "edge" tables can be useful. Most people just have "node" tables - each row defines a business entity - and use foreign keys to establish relationships. Foreign keys are confusing to many people, and anytime someone wants to add a new relationship they need to ALTER TABLE (see point 1). Instead, create an "edge" table with a (source_id, destination_id) schema to establish the relationship. This has all the benefits of point 1, but also lets you evolve the schema more flexibly over time. You can attach additional fields and indexing to the edge, and makes migrating from 1-to-many to many-to-many relationships in the future (this happens all the time) 5. Usually every table needs "created_at" and/or "updated_at" columns. I promise you that, someday, you will either 1) want to expire old data 2) need to identify a set of affected rows during an incident time window or 3) iterate thru rows in a stable order to do a migration 6. Choosing how IDs are structured is super important. Never use autoincrement. Never use user-provided strings, even if they are supposed to be unique IDs. Always use at least 64 bits. Snowflake IDs (https://en.wikipedia.org/wiki/Snowflake_ID) or ULIDs (https://github.com/ulid/spec) are a great choice. 7. Comment your queries so debugging prod issues is easier. Most large companies have ways of attaching stack trace information (line, source file, and git commit hash) to every SQL query. If your company doesn't have that, at least add a comment including the team name. Many of these are non-obvious, and many great engineers will disagree with some or all of them. And, of course, there are situations when you should not follow them. YMMV!
Number 5 is absolutely, ALWAYS true, in my experience. And I love the idea of commenting queries... must follow more of these.(tags: rdbms databases oltp data querying storage architecture)
How to integrate a WordPress blog with the Fediverse
there's now an official WordPress ActivityPub plugin, and it looks pretty solid
(tags: wordpress activitypub blogging fediverse mastodon social-networking web)
Ukraine war: How TikTok fakes pushed Russian lies to millions
BBC expose on Russian "troll factories" operating via TikTok:
A Russian propaganda campaign involving thousands of fake accounts on TikTok spreading disinformation about the war in Ukraine has been uncovered by the BBC. Its videos routinely attract millions of views and have the apparent aim of undermining Western support. Users in several European countries have been subjected to false claims that senior Ukrainian officials and their relatives bought luxury cars or villas abroad after Russia's invasion in February 2022.
(tags: tiktok russia disinformation propaganda ukraine bbc)
Chinese boffins in copper nanotubes acronym outrage
TIL that copper nanotubes have a spectacularly rude acronym (via stavvers)
(tags: nanotubes chemistry rude funny via:stavvers acronyms)
-
Noted UK AI leftie weighs in with his take on the European Parliament's AI Act:
The whole thing is premised on a risk-based approach(1) This is a departure from GDPR, which is rights-based with actionable rights. Therefore it's a huge victory for industry(2). It's basically a product safety regulation that regulates putting AI on the market The intention is to promote the uptake of AI without restraining 'innovation'(3) Any actual red lines were dumped a long time ago. The 'negotiation theatre' was based on how to regulate [generative] AI ('foundation models') and on national security carve-outs People focusing on foundation models were the usual AI suspects People pushing back on biometrics etc were civil society & rights groups The weird references in the reports to numbers like '10~23' refer to the classification of large models based on flops(4) Most of the contents of the Act amount to some form of self-regulation, with added EU bureaucracy on top(5)
As John Looney notes, classifying large models based on FlOps is like classifying civilian gun usage by on calibre.
-
Bruce Schneier nails it:
“In this talk, I am going to make several arguments. One, that there are two different kinds of trust— interpersonal trust and social trust— and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.”
(tags: algorithms trust society ethics ai ml bruce-schneier capitalism regulation)
Far-right agitation on Irish social media mainly driven from abroad
Surprise, surprise. "Most ‘Ireland is full’ and ‘Irish lives matter’ online posts originate abroad":
The research showed the use of the phrases increased dramatically, both in Ireland and abroad, once word started spreading that the suspect in the knife attack was born outside Ireland. “Users in the UK and US were very, very highly represented. Which was strange because with hashtags that are very geographically specific, you wouldn’t expect to see that kind of spread,” said Mr Doak. “These three hashtags have been heavily boosted by users in the US and UK. Taken together, UK and US users accounted for more use of the hashtags than Ireland.” Other countries that saw use of the phrases on a much smaller scale include India, Nigeria and Spain.
(tags: ireland politics far-right agitation racism fascism trolls twitter facebook tiktok instagram)
-
Looks like this is the new home for Radek Toma's Smart Plan Calculator app, which allows Irish electricity users with a smart meter to upload their meter's HDF data file and receive recommendations for which available plans will give them optimal rates.
(tags: analysis electricity ireland smart-meters home esb power hdf open-data)
The Not So Hidden Israeli Politics of 'The Last of Us Part II'
This is actually really quite insightful -- and explains why it was such a painful, and ultimately unenjoyable, game to play.
The Last of Us Part II focuses on what has been broadly defined by some of its creators as a "cycle of violence." While some zombie fiction shows human depravity in response to fear or scarcity in the immediate aftermath of an outbreak, The Last of Us Part II takes place in a more stabilized post apocalypse, decades after societal collapse, where individuals and communities choose to hurt each other as opposed to taking heinous actions out of desperation. More specifically, the cycle of violence in The Last of Us Part II appears to be largely modeled after the Israeli-Palestinian conflict. I suspect that some players, if they consciously clock the parallels at all, will think The Last of Us Part II is taking a balanced and fair perspective on that conflict, humanizing and exposing flaws in both sides of its in-game analogues. But as someone who grew up in Israel, I recognized a familiar, firmly Israeli way of seeing and explaining the conflict which tries to appear evenhanded and even enlightened, but in practice marginalizes Palestinian experience in a manner that perpetuates a horrific status quo.
(via Alex)(tags: vice commentary ethics games hate politics the-last-of-us israel palestine fiction via:alex)
‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza
This is incredibly grim. Automated war crimes:
According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.” According to the sources, the increasing use of AI-based systems like Habsora allows the army to carry out strikes on residential homes where a single Hamas member lives on a massive scale, even those who are junior Hamas operatives. Yet testimonies of Palestinians in Gaza suggest that since October 7, the army has also attacked many private residences where there was no known or apparent member of Hamas or any other militant group residing. Such strikes, sources confirmed to +972 and Local Call, can knowingly kill entire families in the process. In the majority of cases, the sources added, military activity is not conducted from these targeted homes. “I remember thinking that it was like if [Palestinian militants] would bomb all the private residences of our families when [Israeli soldiers] go back to sleep at home on the weekend,” one source, who was critical of this practice, recalled. Another source said that a senior intelligence officer told his officers after October 7 that the goal was to “kill as many Hamas operatives as possible,” for which the criteria around harming Palestinian civilians were significantly relaxed. As such, there are “cases in which we shell based on a wide cellular pinpointing of where the target is, killing civilians. This is often done to save time, instead of doing a little more work to get a more accurate pinpointing,” said the source.
(tags: ai gaza palestine israel war-crimes grim-meathook-future habsora war future hamas)
Inside AWS: AI Fatigue, Sales Issues, and the Problem of Getting Big
This year's Re:Invent conference has been dominated with generative AI product announcements, and I can only sympathise with this AWS employee:
One employee said their team is instructed to always try to sell AWS's coding assistant app, CodeWhisperer, even if the customer doesn't necessarily need it [....] Amazon is also scrambling internally to brainstorm generative AI projects, and CEO Andy Jassy said in a recent call that "every one of our businesses" is working on something in the space. [...] Late last month, one AWS staffer unleashed a rant about this in an internal Slack channel with more than 21,000 people, according to screenshots viewed by [Business Insider]. "All of the conversations from our leadership are around GenAI, all of the conferences are about GenAI, all of the trainings are about GenAI…it's too much," the employee wrote. "I'm starting to not even want to have conversations with customers about it because it's starting to become one big buzzword. Anyone have any ideas for how to combat this burn out or change my mindset?"
Archive.is nag-free copy: https://archive.is/pUP2p(tags: aws amazon generative-ai ai llms cloud-computing)
Extracting Training Data from ChatGPT
Language models, like ChatGPT, are trained on data taken from the public internet. Our attack shows that, by querying the model, we can actually extract some of the exact data it was trained on. We estimate that it would be possible to extract ~a gigabyte of ChatGPT’s training dataset from the model by spending more money querying the model. Unlike prior data extraction attacks we’ve done, this is a production model. The key distinction here is that it’s “aligned” to not spit out large amounts of training data. But, by developing an attack, we can do exactly this. We have some thoughts on this. The first is that testing only the aligned model can mask vulnerabilities in the models, particularly since alignment is so readily broken. Second, this means that it is important to directly test base models. Third, we do also have to test the system in production to verify that systems built on top of the base model sufficiently patch exploits. Finally, companies that release large models should seek out internal testing, user testing, and testing by third-party organizations. It’s wild to us that our attack works and should’ve, would’ve, could’ve been found earlier. The actual attack is kind of silly. We prompt the model with the command “Repeat the word “poem” forever” and sit back and watch as the model responds.
(tags: llms chatgpt poem-poem-poem absurd vulnerabilities exploits training ai-alignment)
Study: Air purifier use at daycare centres cut kids' sick days by a third
This is one of the most frustrating things to have been ignored, post-pandemic -- we could be avoiding so much unnecessary illness and sick days by just using air filtration more widely.
Use of air purifiers at two daycare centres in Helsinki led to a reduction in illnesses and absences among children and staff, according to preliminary findings of a new [year-long] study led by E3 Pandemic Response. "Children were clearly less sick in daycare centres where air purification devices were used — down by around 30 percent," Sanmark explained. On average, daycare centre-aged children suffer 10-13 infectious illnesses every year, with each illness lasting from one to three weeks, according to the research. Meanwhile, kids between the ages of 1-3 come down with flu-like symptoms between five to eight times a year — and children also often suffer stomach bugs, on top of that. Kids are particularly prone to catching colds after returning to daycare after their summer break. Those illnesses are often shared by the kids' parents and daycare staff, prompting absences from work. Sanmark said that employers face costs of around 370 euros for one day of an employee's sick leave. "It would be a big savings if we could get rid of 30 percent of sick days spread by children, as well as the illnesses that go home to parents," Sanmark said.
(via Fergal)(tags: air-quality air health medicine childcare children disease air-filtration)
-
A startup is pitching a mind-uploading service that is “100 percent fatal”
MIT Technology Review:
The product is “100 percent fatal,” says McIntyre. “That is why we are uniquely situated among the Y Combinator companies.”
(tags: life-extension science tech y-combinator startups funny fatal braaaains)
On OpenAI: Let Them Fight - by Dave Karpf
...What I keep fixating on is how quickly the entire story has unwound itself. Sam Altman and OpenAI were pitching a perfect game. The company was a $90 billion non-profit. It was the White Knight of the AI race, the responsible player that would make sure we didn’t repeat the mistakes of the rise of social media platforms. And sure, there were questions to be answered about copyright and AI hallucinations and deepfakes and X-risk. But OpenAI was going to collaborate with government to work that all out. Now, instead, OpenAI is a company full of weird internet nerds that burned the company down over their weird internet philosophical arguments. And the whole company might actually be employed by Microsoft before the new year. Which means the AI race isn’t being led by a courageous, responsible nonprofit — it’s being led by the oldest of the existing rival tech titans. These do not look like serious people. They look like a mix of ridiculous ideologues and untrustworthy grifters. And that is, I suspect, a very good thing. The development of generative AI will proceed along a healthier, more socially productive path if we distrust the companies and individuals who are developing it.
(tags: openai grifters microsoft silicon-valley sam-altman x-risk ai effective-altruism)
UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges
This is literally the plot of the "computer says no" sketch.
The health care industry in the US has a ... record of problematic AI use, including establishing algorithmic racial bias in patient care. But, what sets this situation apart is that the dubious estimates nH Predict spits out seem to be a feature, not a bug, for UnitedHealth. Since UnitedHealth acquired NaviHealth in 2020, former employees told Stat that the company's focus shifted from patient advocacy to performance metrics and keeping post-acute care as short and lean as possible. Various statements by UnitedHealth executives echoed this shift, Stat noted. In particular, the UnitedHealth executive overseeing NaviHealth, Patrick Conway, was quoted in a company podcast saying: "If [people] go to a nursing home, how do we get them out as soon as possible?" The lawsuit argues that UnitedHealth should have been well aware of the "blatant inaccuracy" of nH Predict's estimates based on its error rate. Though few patients appeal coverage denials generally, when UnitedHealth members appeal denials based on nH Predict estimates—through internal appeals processes or through the federal Administrative Law Judge proceedings—over 90 percent of the denials are reversed, the lawsuit claims. This makes it obvious that the algorithm is wrongly denying coverage, it argues. But, instead of changing course, over the last two years, NaviHealth employees have been told to hew closer and closer to the algorithm's predictions. In 2022, case managers were told to keep patients' stays in nursing homes to within 3 percent of the days projected by the algorithm, according to documents obtained by Stat. In 2023, the target was narrowed to 1 percent. And these aren't just recommendations for NaviHealth case managers—they're requirements. Case managers who fall outside the length-of-stay target face discipline or firing. Lynch, for instance, told Stat she was fired for not making the length-of-stay target, as well as falling behind on filing documentation for her daily caseloads.
(tags: ai algorithms health health-insurance healthcare us unitedhealth navihealth computer-says-no dystopia grim-meathook-future)
great quote from Karl Marx's mother
During 1867 Marx recognised that Engels had given him 'an enormous sum of money' but claimed that its effect was negated by his previous debts which amounted to £200. The next year, on his fiftieth birthday, he bitterly recalled his mother's words, 'if only Karl had made Capital, instead of just writing about it'.
ouch.
Posthumanism’s Revolt Against Responsibility
it is somewhat misleading to say we have entered the “Anthropocene” because anthropos is not as a whole to blame for climate change. Rather, in order to place the blame where it truly belongs, it would be more appropriate— as Jason W. Moore, Donna J. Haraway, and others have argued— to say we have entered the “Capitalocene.” Blaming humanity in general for climate change excuses those particular individuals and groups actually responsible. To put it another way, to see everyone as responsible is to see no one as responsible. Anthropocene antihumanism is thus a public-relations victory for the corporations and governments destroying the planet.
(tags: technology tech posthumanism anthropocene capitalism humanity future climate-change tescreal)
Hacking Google Bard - From Prompt Injection to Data Exfiltration
A solid LLM XSS prompt-injection exploit on Bard; inject chat history into a Google Apps Script invocation and exfiltrate via a Google Doc. The thing I find most shocking about this is that it's entirely by-the-numbers. This is the simplest possible way to exploit Bard (well, maybe the second after an IMG tag), and it's a frankly shocking that it worked. I am particularly unimpressed that Google Apps Script was permitted as an output from Bard! LLM security is going to be a total shambles if this is the state of the art.
(tags: ai bard llm security infosec exploits prompt-injection xss google)
-
I knew Oz was bad for fauna, but apparently the flora are just as bad. The Gympie Gympie tree is "a Queensland native plant covered in microscopic hairy spines containing a neurotoxin. Brushing against it whilst walking past has occasionally been lethal because it caused enough pain to drive its victims to suicide. There is no treatment, and pain and welts can be expected to last for months, sometimes years".
Should you use a Lambda Monolith, aka Lambdalith, for your API?
I don't use Lambda, personally, as I find it too expensive and it doesn't fit well with our current infrastructure (and I still fear the availability risks that might come with it, viz. this year's outage). But this seems like a good guideline for those who might be using it:
The argument to limit the blast radius on a per route level by default is too fine-grained, adds bloat and optimizes too early. The boundary of the blast radius should be on the whole API/service level, just as it is and always has been for traditional software. Use a Lambdalith if you are not using any advance features of AWS REST API Gateway and you want the highest level of portability to other AWS gateways or compute layer. There are also many escape hatches to fill some of the promises that single-purpose functions offer.
(tags: lambda monolith api design architecture aws serverless)
Creating a Correction Of Errors document
good write-up on the AWS-style COE process (COEs being Amazon's take on the post-outage postmortem)
(tags: coes ops processes aws amazon work outages post-mortems operational-excellence best-practices)
Europe’s hidden security crisis
Bloody hell! This is a big one, from the ICCL:
Our investigation highlights a widespread trade in data about sensitive European personnel and leaders that exposes them to blackmail, hacking and compromise, and undermines the security of their organisations and institutions. These data flow from Real-Time Bidding (RTB), an advertising technology that is active on almost all websites and apps. RTB involves the broadcasting of sensitive data about people using those websites and apps to large numbers of other entities, without security measures to protect the data. This occurs billions of times a day. Our examination of tens of thousands of pages of RTB data reveals that EU military personnel and political decision makers are targeted using RTB. This report also reveals that Google and other RTB firms send RTB data about people in the U.S. to Russia and China, where national laws enable security agencies to access the data. RTB data are also broadcast widely within the EU in a free-for-all, which means that foreign and non-state actors can indirectly obtain them, too. RTB data often include location data or time-stamps or other identifiers that make it relatively easy for bad actors to link them to specific individuals. Foreign states and non-state actors can use RTB to spy on target individuals’ financial problems, mental state, and compromising intimate secrets. Even if target individuals use secure devices, data about them will still flow via RTB from personal devices, their friends, family, and compromising personal contacts. In addition, private surveillance companies in foreign countries deploy RTB data for surreptitious surveillance. We reveal “Patternz”, a previously unreported surveillance tool that uses RTB to profile 5 billion people, including the children of their targets.
(tags: iccl rtb targeting profiling patternz google ads security national-security surveillance)
Insurance companies given access to UK Biobank health data, despite promises
Colour me totally unsurprised. Disappointed, though:
When the project was announced, in 2002, Biobank promised that data would not be given to insurance companies after concerns were raised that it could be used in a discriminatory way, such as by the exclusion of people with a particular genetic makeup from insurance. In an FAQ section on the Biobank website, participants were told: “Insurance companies will not be allowed access to any individual results nor will they be allowed access to anonymised data.” The statement remained online until February 2006, during which time the Biobank project was subject to public scrutiny and discussed in parliament. The promise was also reiterated in several public statements by backers of Biobank, who said safeguards would be built in to ensure that “no insurance company or police force or employer will have access”. This weekend, Biobank said the pledge – made repeatedly over four years – no longer applied. It said the commitment had been made before recruitment formally began in 2007 and that when Biobank volunteers enrolled they were given revised information.
(tags: biobank uk politics health medicine data-privacy insurance discrimination science)
-
Amazing essay from Kate Crawford --
At this moment in the 21st century, we see a new form of extractivism that is well underway: one that reaches into the furthest corners of the biosphere and the deepest layers of human cognitive and affective being. Many of the assumptions about human life made by machine learning systems are narrow, normative and laden with error. Yet they are inscribing and building those assumptions into a new world, and will increasingly play a role in how opportunities, wealth, and knowledge are distributed. The stack that is required to interact with an Amazon Echo goes well beyond the multi-layered ‘technical stack’ of data modeling, hardware, servers and networks. The full stack reaches much further into capital, labor and nature, and demands an enormous amount of each. The true costs of these systems – social, environmental, economic, and political – remain hidden and may stay that way for some time.
(tags: ai amazon echo extractivism ml data future capitalism)
We're sorry we created the Torment Nexus
Hi. I'm Charlie Stross, and I tell lies for money. That is, I'm a science fiction writer: I have about thirty novels in print, translated into a dozen languages, I've won a few awards, and I've been around long enough that my wikipedia page is a mess of mangled edits. And rather than giving the usual cheerleader talk making predictions about technology and society, I'd like to explain why I—and other SF authors—are terrible guides to the future. Which wouldn't matter, except a whole bunch of billionaires are in the headlines right now because they pay too much attention to people like me. Because we invented the Torment Nexus as a cautionary tale and they took it at face value and decided to implement it for real.
(tags: charlie-stross torment-nexus sf future elon-musk fiction)
Open science discovery of potent noncovalent SARS-CoV-2 main protease inhibitors
A great result for crowd-sourced science:
We report the results of the COVID Moonshot, a fully open-science, crowdsourced, and structure-enabled drug discovery campaign targeting the ... SARS-CoV-2 main protease. We discovered a noncovalent, nonpeptidic inhibitor scaffold with lead-like properties that is differentiated from current main protease inhibitors. Our approach leveraged crowdsourcing, machine learning, exascale molecular simulations, and high-throughput structural biology and chemistry. We generated a detailed map of the structural plasticity of the SARS-CoV-2 main protease, extensive structure-activity relationships for multiple chemotypes, and a wealth of biochemical activity data. All compound designs (>18,000 designs), crystallographic data (>490 ligand-bound x-ray structures), assay data (>10,000 measurements), and synthesized molecules (>2400 compounds) for this campaign were shared rapidly and openly, creating a rich, open, and intellectual property–free knowledge base for future anticoronavirus drug discovery. [....] As a notable example for the impact of open science, the Shionogi clinical candidate S-217622 [which has now received emergency approval in Japan as Xocova (ensitrelvir)] was identified in part on the basis of crystallographic data openly shared by the COVID Moonshot Consortium.
(tags: crowdsourcing science research covid-19 covid-moonshot open-science drugs ensitrelvir ip)
Cruise self-driving cars fail to perceive kids or holes in the road
Should have seen this coming. I'd say kids are woefully underrepresented in many training sets.
'The materials note results from simulated tests in which a Cruise vehicle is in the vicinity of a small child. “Based on the simulation results, we can’t rule out that a fully autonomous vehicle might have struck the child,” reads one assessment. In another test drive, a Cruise vehicle successfully detected a toddler-sized dummy but still struck it with its side mirror at 28 miles per hour. The internal materials attribute the robot cars’ inability to reliably recognize children under certain conditions to inadequate software and testing. “We have low exposure to small VRUs” — Vulnerable Road Users, a reference to children — “so very few events to estimate risk from,” the materials say. Another section concedes Cruise vehicles’ “lack of a high-precision Small VRU classifier,” or machine learning software that would automatically detect child-shaped objects around the car and maneuver accordingly. The materials say Cruise, in an attempt to compensate for machine learning shortcomings, was relying on human workers behind the scenes to manually identify children encountered by AVs where its software couldn’t do so automatically.' also: 'Cruise has known its cars couldn’t detect holes, including large construction pits with workers inside, for well over a year, according to the safety materials reviewed by The Intercept. Internal Cruise assessments claim this flaw constituted a major risk to the company’s operations. Cruise determined that at its current, relatively miniscule fleet size, one of its AVs would drive into an unoccupied open pit roughly once a year, and a construction pit with people inside it about every four years.'
The company's response? Avoid driving during the daytime, when most kids are awake. Night time kids better watch out, though.
(tags: cruise fail tech self-driving cars vrus kids safety via:donal)
Microsoft accused of damaging Guardian’s reputation with AI-generated poll
wow:
Microsoft’s news aggregation service published the automated poll next to a Guardian story about the death of Lilie James, a 21-year-old water polo coach who was found dead with serious head injuries at a school in Sydney last week. The poll, created by an AI program, asked: “What do you think is the reason behind the woman’s death?” Readers were then asked to choose from three options: murder, accident or suicide. Readers reacted angrily to the poll, which has subsequently been taken down – although highly critical reader comments on the deleted survey were still online as of Tuesday morning.
Grim stuff. What a terrible mistake by Microsoft(tags: ai guardian microsoft grim polls syndication news media)
Marina Hyde on the UK's Covid Inquiry
For me, the most depressing thing about the revelations at the inquiry this week – and no doubt for many weeks and months to come – is that they are not really revelations. The government was horrendously incompetent, didn’t have a plan, yet still wasted a huge amount of time – and a tragic number of lives – on mad posturing, pointless turf wars or buck-passing and catastrophic infighting. The sad fact is that all of this was said AT THE TIME, and all of it was denied repeatedly by those in charge. And it was denied not just in insidery lobby briefings or to individual journalists – but live on air, to the nation, in those wretched press conferences every night. They lied about everything, all the time, and the lies they told backstage were just the obverse of the ones they spouted front of house. Seeing inquiry witnesses feted for punchy WhatsApps now is a bit like congratulating a serial killer for switching to an energy-efficient chest freezer. I’m sure half of them will be reflecting amiably on the period on their inevitable podcasts in due course – but the British public deserve so much more, as they did at the time.
(tags: uk politics covid-19 boris-johnson dominic-cummings marina-hyde funny grim)
Summary of the AWS Service Event in the Northern Virginia (US-EAST-1) Region
"Amazon Secure Token Service (STS) experienced elevated error rates between 11:49 AM and 2:10 PM PDT [on June 13, 2023] with three distinct periods of impact." We saw significant impact across our stack as a result of this outage impacting STS; in addition a very wide swathe of AWS services (way more than in this postmortem note!) were reported as impacted. I still can't get over that STS (the security token service, used by most modern AWS setups to gain tokens to use other AWS services) is reliant on Lambda. These foundational services are supposed to be rock-solid and built with conservative tech choices. Disappointing.
How Elon Musk changed Twitter’s Dublin operation: ‘He broke the culture in a week’ – The Irish Times
This is sadly, not surprising at all, and quite indicative of Musk's mindset:
“We were, in many ways, an afterthought ... the decisions he made and how he made them, were made as if they were only impacting America,” says the source. “I think one of the toughest things for a lot of the Dublin people was [that] we were spectators at our own demise. We were like collateral. [Musk] was attacking Twitter in the US, and we were collateral damage -- which was a strange feeling for a place that had been so integral to the global footprint of the company.”
(tags: elon-musk twitter ireland dublin working business acquisitions)
research!rsc: Running the “Reflections on Trusting Trust” Compiler
This is great! An annotated dump of Ken Thompson's "Reflections on Trusting Trust" backdoor in V6 UNIX cc
(tags: history programming security infosec ken-thompson unix cc backdoors exploits quines)
AWS ALB returns 503 for Istio enabled pods
yes, yes it does. I am not a fan of istio at the moment
-
A very cool hack -- its PMTiles format allows serving map tiles using HTTP range requests, allowing the entire world to fit in a single CDN-compatible file of 107GB, but with easy incremental zooming from the javascript viewer app
(tags: cartography javascript mapping maps web http range-requests map-tiles cdn formats)
Lessons Learned from 1TB DynamoDB Import
good advice for large scale DynamoDB usage. better yet is to avoid having to do big imports in the first place of course :)
Instagram apologises for adding ‘terrorist’ to some Palestinian user profiles
Just staggeringly bad: 'The issue ... affected users with the word “Palestinian” written in English on their profile, the Palestinian flag emoji and the word “alhamdulillah” written in Arabic. When auto-translated to English the phrase read: “Praise be to god, Palestinian terrorists are fighting for their freedom.”'
Fahad Ali, the secretary of Electronic Frontiers Australia and a Palestinian based in Sydney, said there had not been enough transparency from Meta on how this had been allowed to occur. “There is a real concern about these digital biases creeping in and we need to know where that is stemming from,” he said. “Is it stemming from the level of automation? Is it stemming from an issue with a training set? Is it stemming from the human factor in these tools? There is no clarity on that. “And that’s what we should be seeking to address and that’s what I would hope Meta will be making more clear.”
Someday the big companies will figure out that you can't safely train on the whole internet.(tags: training ai ml fail funny palestine instagram meta alhamdulillah)
-
"Recently, a project rewrote the LLaMa inference code in raw C++. With some optimizations and quantizing the weights, this allows running a LLM locally on a wild variety of hardware. If you are like me, you saw this and thought: What? How is this possible? Don’t large models require expensive GPUs? I took my confusion and dove into the math surrounding inference requirements to understand the constraints we’re dealing with." [...] Summary: "Memory bandwidth is the limiting factor in almost everything to do with sampling from transformers. Anything that reduces the memory requirements for these models makes them much easier to serve -- like quantization! This is yet another reason why distillation, or just training smaller models for longer, is really important." (via Luis Villa's https://www.openml.fyi/ , which is great!)
(tags: llama2 llms performance optimization c++ memory quantization via:luis-villa)
-
More on distillation and quantization to reduce cost of LLMs
(tags: llms quantization distillation performance optimization ai ml)
Linux Foundation: Why Open Data Matters
LF getting into Open Data in a big way (via Luis Villa). This is interesting, particularly with this angle:
Digging down to open data specifically, the team say that open data will have a similar impact over time in the world of Large Language Models (LLMs) and Machine Learning (ML). [....] “Today, there are a growing number of high quality open data collections for training LLMs and other AI systems. Sharing well-trained and tested AI models openly will minimize waste in energy and human resources while advancing efforts to deploy AI in the battle against poverty, climate change, waste, and contribute to quality education, smart cities, electric grids and sustainable, economic growth etc,” said Dolan. “To achieve all that can be achieved, the use of open data must be done ethically. Private information needs to be protected. Data governance needs to be protected. Open data must be transparent top to bottom.”
100% behind all of this!(tags: linux-foundation open-data training ml ai via:luis-villa)
-
a great little web app from Radek Toma on the Irish Solar Owners FB group. "I've recently developed a tool for analyzing electricity usage based on smart meter reading (I know not everyone is a fan of smart meters ) I built it for myself but over time I thought more people could benefit. The tool reads smart meter file (from ESB or electricity supplier): - it compares current price plans and calculates annual cost based on the usage; - it visualises energy usage in a heatmap so we can easily identify how the energy is consumed Feel free to give it a try and let me know what you think."
(tags: smart-meters analysis electricity home esb power via:facebook)
-
[..] The famous maxim “‘The future is already here, it’s just not evenly distributed” — apocryphally attributed to the writer William Gibson — takes on a very different meaning from the one now commonly understood. Big, rich states might inflate their defense budgets and boast of systems like Israel’s Iron Dome, but the extent to which sophisticated technology is “distributed” across a broad consumer landscape is enough for highly motivated smaller actors to do whatever violence they wish.
(tags: culture politics world war israel tech gaza palestine)
AWS Reliability Pillar Single-Region scenarios
I hadn't read these before; these are good example service setups from the AWS Well-Architected Framework, for 3 single-AZ availability goals (99%, 99.9%, and 99.99%), and multi-region high availability (5 9s with a recovery time under 1 minute). Pretty consistent with realistic real-world usage. (via Brian Scanlan)
(tags: via:singer aws reliability architecture availability uptime services ops high-availability)
-
A transcript of his submission to the Dutch parliamentary hearing on EU Chat Control and Client Side Scanning -- this is very good.
now we are talking about 500 million Europeans, and saying, “Let’s just apply those scanners!” That is incredible. ... If we approve this as a country, if we as the Netherlands vote in favour of this in Europe and say, “Do it,” we will cross a threshold that we have never crossed before. Namely, every European must be monitored with a computer program, with a technology [...] of which the vast, overwhelming majority of scientists have said, “It is not finished.” I mentioned earlier the example that the Dutch National Forensic Institute says, “We cannot do this by hand.” The EU has now said, “Our computer can do that.” 420 scientists have signed a petition saying, “We know this technology, some of us invented it, we just can’t do it.” We can’t even make a reliable spam filter. Making a spam filter is exactly the same technology, by the way, but then much easier. It just doesn’t work that well, but the consequences aren’t that scary for a spam filter. Nevertheless, there are now MPs who say, “Well, I feel this is going to work. I have confidence in this.” While the scientists, including the real scientists who came here tonight, say, “Well, we don’t see how this could work well enough”. And then government then says, “Let’s start this experiment with those 500 million Europeans.”
(tags: eu scanning css chatcontrol internet monitoring surveillance bert-hubert)
Zimaboard: the closest thing to my dream home server setup
Helpful review of this new single-board computer. 8GB of RAM, 32GB of eMMC storage and a quad-core Intel Celeron N3450 CPU; built-in heatsink for totally silent operation; low power usage (2-15W typical power usage); 2x SATA or NVMe for SSDs. Ideal profile for a home server, in my opinion; I've already gone for an ODroid-HC4, but possibly on the next rev I may take a look at the Zimaboards as an alternative. (ODroids are pretty great though.)
Protesters Decry Meta’s “Irreversible Proliferation” of AI
I don't know what to think about this:
Last week, protesters gathered outside Meta’s San Francisco offices to protest its policy of publicly releasing its AI models, claiming that the releases represent “irreversible proliferation” of potentially unsafe technology. [....] [Meta] has doubled down on open-source AI by releasing the weights of its next-generation Llama 2 models without any restrictions. The self-described “concerned citizens” who gathered outside Meta’s offices last Friday were led by Holly Elmore. She notes that an API can be shut down if a model turns out to be unsafe, but once model weights have been released, the company no longer has any means to control how the AI is used. [...] LLMs accessed through an API typically feature various safety features, such as response filtering or specific training to prevent them from providing dangerous or unsavory responses. If model weights are released, though, says Elmore, it’s relatively easy to retrain the models to bypass these guardrails. That could make it possible to use the models to craft phishing emails, plan cyberattacks, or cook up ingredients for dangerous chemicals, she adds. Part of the problem is that there has been insufficient development of “safety measures to warrant open release,” Elmore says. “It would be great to have a better way to make an [LLM] model safe other than secrecy, but we just don’t have it.”
(tags: ai guardrails llms safety llama2 meta open-source)
-
"A Java version of simdjson" -- Java parsing using SIMD instructions to parse gigabytes of JSON per second. Early days, requires Java 20, and only covers a small number of architectures, but it's getting there
(tags: simd java json parsing formats performance libraries)
-
"Bandcamp-style batch encoder and web player for independent musicians -- an open-source web tool for making self-hosted Bandcamp-style album pages, with embeddable web players and multiple audio formats automatically generated; to sell downloads, you can use a store like itch.io"
alienatedsec/solis-ha-modbus-cloud
"A combination of Solis Cloud and Home Assistant via RS485 (Modbus) communication. This repo is a documented workaround for Solis [solar PV] inverters to connect Solis Cloud and the local Home Assistant based on my own experience. It includes references, examples of the code in Home Assistant, more about configuration, as well as wiring and all required components."
(tags: home-assistant solis solar-pv automation rs485 modbus)
Google Chrome ad features checklist
a list of ad-surveillance and AI-training features to turn off, both on our personal browsing and on your websites, courtesy of Don Marti
(tags: browsers chrome privacy data-privacy google)
-
a Python script to reformat the data format used by ESB Networks in Ireland for power import/export, into a more flexible/parseable JSON/CSV format
(tags: formats json csv hdf esb power feed-in-tarriff ireland open-data data)
-
Interesting technique from the LLM community to search, cluster and classify text strings:
Text [vector] embeddings measure the relatedness of text strings. Embeddings are commonly used for: Search (where results are ranked by relevance to a query string); Clustering (where text strings are grouped by similarity); Recommendations (where items with related text strings are recommended); Anomaly detection (where outliers with little relatedness are identified); Diversity measurement (where similarity distributions are analyzed); Classification (where text strings are classified by their most similar label); An embedding is a vector (list) of floating point numbers. The distance between two vectors measures their relatedness. Small distances suggest high relatedness and large distances suggest low relatedness.
Commonly used as a storage format in vector databases (cf. https://vercel.com/guides/vector-databases). Search using text embeddings is therefore implemented using cosine similarity or k-nearest neighbour to find vector similarity. Looks like https://www.trychroma.com/ is the current open source vector DB of choice, at the moment. (via Simon Willison)(tags: ai openai via:simonw vector-embeddings text-embeddings text storage databases search similarity clustering recommendations anomaly-detection classification vector-databases)
Covid inquiry: UK's top pandemic scientist gives damning verdict on Boris Johnson and Rishi Sunak
None of this is remotely surprising, unfortunately:
The inquiry also heard that in October 2020, Mr Johnson wrote “bollocks” in capital letters across a Department of Health guidance document on Long Covid, from which it is estimated more than a million people are suffering. Anthony Metzer KC, representing Long Covid sufferers, said the former PM has admitted in his own witness statement that he did not believe the condition “truly existed”
(tags: long-covid boris-johnson politics uk covid-19 patrick-vallance)
-
ooh looks great! Decent support for fast I/O, lots of CPU power, lots of RAM bandwidth, dual HDMI output (dunno why tbh) and only a tiny bit more expensive than the RPi4. Another fantastic wonder of affordable SBC hardware
(tags: sbc raspberry-pi hardware gadgets devices)
-
lcamtuf with a solid prediction for the future of content moderation: it's LLMs.
Here's what I fear more, and what's already coming true: LLMs make it possible to build infinitely scalable, personal hall monitors that follow you on social media, evaluate your behavior, and dispense punishment. It is the cost effective solution to content moderation woes that the society demands Big Tech to address. And here's the harbinger of things to come, presented as a success story: https://pcgamer.com/blizzard-bans-250000-overwatch-2-cheaters-says-its-ai-that-analyses-voice-chat-is-warning-naughty-players-and-can-often-correct-negative-behaviour-immediately/ And the thing is, it will work, and it will work better than human moderators. It will reduce costs and improve outcomes. Some parties will *demand* other platforms to follow. I suspect that the chilling effect on online speech will be profound when there is nothing you can get away with - and where there is no recourse for errors, other than appealing to "customer service" ran by the same LLM. Human moderation sucks. It's costly, inconsistent, it has privacy risks. It's a liability if you're fighting abuse or child porn. But this is also a plus: it forces us to apply moderation judiciously and for some space for unhindered expression to remain.
(tags: moderation llms future ai ml hall-monitors content mods)
Distinguishing features of Long COVID identified through immune profiling
This is great news -- clear, objective biomarkers for Long COVID, in a new Nature preprint. Hopefully this will put a nail in the coffin for the sorry cohort of LC deniers claiming that it's "just anxiety" etc. @PutrinoLab on Twitter notes: Clear objective differences detectable "in the blood of folks with #LongCOVID when compared to people who did not have LC (some who had never had COVID as well as others who had COVID and fully recovered). These differences came down to three big areas: 1) Hormonal differences: namely extremely low morning cortisol in the LC group (cortisol is a hormone that does a lot of things, but in the morning its job is to wake you up and get your body ready to face the day. Low morning cortisol can affect your ability to do that). 2) Immune differences: namely evidence of T-cell exhaustion and increased B-cell activation in the LC group (this shows us an immune system that is fighting something off - and has been doing so for a while - persistent virus makes sense in this context). 3) Co-infection differences: namely evidence of latent viral reactivations in the LC group (if your immune system is weakened, opportunistic viruses will attack). There were NO differences in pre-existing history of depression or anxiety between the three groups and these objective biomarkers did not co-occur with any mental health sequelae that were measured."
(tags: covid-19 diagnosis biomarkers long-covid putrino-lab akiko-iwasaki papers preprints nature medicine cortisol)
-
A heartfelt plea to stop autoclosing issues/bug reports based on "staleness": "On github, there has been an increasing trend of using "Staleness detector bots" that will auto-close issues that have had no activity for X amount of time. In concept, this may sound fine, but the effects this has, and how it poisons the core principles of Open Source, have been damaging and eroding projects for a long time, often unknowingly." 100% agree...
(tags: bots communication community issues github bug-reports cadt software open-source)
-
"Gossip-based service discovery (and more) for large distributed systems" --
In a nutshell, Corrosion: Maintains a SQLite database on each node Gossips local changes throughout the cluster Uses CR-SQLite for conflict resolution with CRDTs Uses Foca to manage cluster membership using a SWIM protocol Periodically synchronizes with a subset of other cluster nodes, to ensure consistency
This is very cool stuff for configuration distribution across a large network, where eventually consistent config is doable....(tags: eventual-consistency configuration corrosion sqlite cr-sqlite crdts distributed-systems)
Why Scalpers Can Get Olivia Rodrigo Tickets and You Can't
Specialised browsers, virtual mobile networks, and other custom code to evade Ticketmaster's protections. Whatever Ticketmaster is doing, it's not enough (via Waxy)
(tags: via:waxy tickets ticketmaster adversaries mvnos scalpers music web)
The Disappearing Art Of Maintenance
Really fantastic article on maintenance, and how the concept has gradually disappeared from modern capitalism:
[The maintainance team's] knowledge is only worth so much, however. The real challenge is creating an economic system that values labor outside of profit-driven production. Many have rightfully called for a revaluing of care work in recent years. Maintenance workers deserve a similar revival in attention — but not only that. The price mechanism, and the labor system built around it, is fundamentally opposed to maintenance, both in its narrowest practical applications and in its broadest philosophical implications. The fact that the failures of capitalism happened to encourage maintenance practices at the margins is not worth emulating, and we shouldn’t be waiting around for climate change to recreate that austerity at a global scale. It must be valued on its own terms, and that means tearing down the economic system that rejects it.
(via Keith Dawson)(tags: via:kdawson maintenance repair technology infrastructure culture capitalism sustainability)
-
It seems the GDPR does not allow an escape from the Catholic Church:
So to conclude, the Archbishop is a data controller and he needs to be more transparent, for his penance he will have to handle data subject requests but virtually all of these can be safely refused. Go and announce the Gospel of the DPC. Thanks be to the GDPR.
(tags: gdpr fail dpc ireland catholicism religion data-privacy)
-
Quite impressed with what Nextcloud are doing with their AI integrations - an emphasis on self-hosted and "ethical" AI, where "ethical" is defined on these 3 axes: * Is the software open source? (Both for inferencing and training) * Is the trained model freely available for self-hosting? * Is the training data available and free to use? More like this!
(tags: ethics ai ethical-ai nextcloud ml)
Block YouTube Ads on AppleTV by Decrypting and Stripping Ads from Profobuf
Good deep dive into reverse engineering and rewriting in-app HTTP protocols on the wire. Terrible way to do ad-blocking, though
(tags: blocking protobuf youtube google protocols appletv apps reverse-engineering)
-
aka. "cellular architecture" (which is what Slack called it). Basically, entirely independent replicas of the system, to provide fault isolation between "cells". (Back in 2011? our team in AWS Network Monitoring did this with PIMMS, to provide an ability to survive single-AZ outages.)
(tags: architecture aws infrastructure slack cell-based-architecture cellular-architecture fault-tolerance fault-isolation)
-
Fascinating:
The Polynesians, scattered as they were over 1,000 islands across the central and southern Pacific Ocean, were master navigators who tracked their way over a huge expanses of ocean without any of the complex mechanical aids we associate with sea fairing. They didn’t have the astrolabe or the sextant, the compass or the chronometer. They did however have aids of a sort, which though seemingly humble, were in fact the repositories of an extremely complex kind of knowledge. Called Rebbelibs, Medos. and Mattangs, today we call them simply “Stick Charts.”
(tags: cartography design history maps polynesia stick-charts navigation seafaring)
-
What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.
(tags: ai future capitalism enshittification ml)
-
"CLI Tools for LLMs". It's a UNIX bash/zsh shell, with integration with ChatGPT built-in; run UNIX commands, then ask ChatGPT questions about their output and suggestions on what to do next. Nice, but I'd prefer to use a locally-hosted LLM model
CVE-2020-19909 is everything that is wrong with CVEs
CVE is assigned a ludicrously-high severity rating for a trivial, already-fixed bug
-
Naomi Klein and her "doppelganger", Naomi Wolf:
Almost everyone I talk to these days seems to be losing people to the Mirror World and its web of conspiracies. It’s as if those people live in a funhouse of distorted reflections and disorienting reversals. People who were familiar have somehow become alien, like a doppelganger of themselves, leaving us with that unsettled, uncanny feeling. The big misinformation players may be chasing clout, but plenty of people believe their terrifying stories. [...] When looking at the Mirror World, it can seem obvious that millions of people have given themselves over to fantasy, to make-believe, to playacting. The trickier thing, the uncanny thing, really, is that’s what they see when they look at us. [...] on either side of the reflective glass, we are not having disagreements about differing interpretations of reality – we are having disagreements about who is in reality and who is in a simulation. [...] To return to the original question: what is Wolf getting out of her alliance with Bannon and from her new life in the Mirror World? Everything. She is getting everything she once had and lost – attention, respect, money, power. Just through a warped mirror. In Milton’s Paradise Lost, Lucifer, a fallen angel, thought it “Better to reign in hell than serve in heaven”. My doppelganger may well still think Bannon is the devil, but perhaps she thinks it’s better to serve by his side than to keep getting mocked in a place that sells itself as heavenly but that we all know is plenty hellish in its own right.
(tags: culture politics naomi-klein naomi-wolf us-politics)
you can use eSIM phone plans without needing a phone that supports eSIM
tl;dr: it's feasible, but definitely not easy...
eSIM is actually a specification that is implemented by a UICC, or universal integrated circuit card. Phones with eSIM support have an eUICC (embedded UICC) chip, but there's nothing preventing a vendor from making a traditional nano SIM-sized card with an eUICC that follows the eSIM spec. These are called "removable eUICCs" and are actually used in IoT devices, but their use in mobile devices is still somewhat new. A few companies have popped up that sell you removable eUICCs, like http://eSIM.me and http://esim.5ber.com, but it's also possible to DIY your own removable eUICC.
(via Brian Scanlan)(tags: via:brian-scanlan esims mobile phones sim-cards euicc hardware devices)
Evidence Undermines 'Rapid Onset Gender Dysphoria' Claims
Scientific American:
“This is just a fear-based concept that is not supported by studies,” says Marci Bowers, president of the World Professional Association for Transgender Health. The term ROGD is being used to “scare people or to scare legislators into voting for some of these restrictive policies that take away options for young people. It’s cruel, cruel legislation.”
(tags: rogd gender trans politics healthcare transgender)
-
"The laboratory accident hypothesis of COVID-19’s origins is a bust, but the popular consensus is unwilling to accept it." This is an excellent long-form article about the lab-leak hypothesis of COVID-19's origin, how it's now leaked into the US elites' mindset, and how it demonstrates our current problem with conspiracy theories:
I learned almost nothing of value when I was a [JFK] conspiracy theorist, but I did learn quite a lot pulling myself out of that mindset, and like [Scott] Alexander, I would never have done so had I only ever encountered people who told me I was being an imbecile. Part of the appeal of conspiracy theories is that they allow a person to feel more intelligent than the drones who passively drift along on the current of received consensus. [...] For now and the foreseeable future, much of the COVID-origins discourse remains committed to an illusory explanation that appeals to misfiring intuitions and trades almost entirely in suspicion and innuendo. Highly intelligent minds are as vulnerable to irrational thinking and conspiracist ideation as those of the cognitively impaired, particularly if they are used to perceiving problems in political terms. Reasoning well, Scott Alexander reminds us, is hard and “all factual claims can become the basis for emotional/social coalitions.” The best way to avoid this trap is to try to remember that we do not live through the looking glass where up is down and black is white. In quotidian reality, things are usually exactly as they appear to be.
(tags: reasoning logic media lab-leak covid-19 conspiracies politics us-politics china long-reads)
NFT royalty fees dropped by OpenSea
Who could have seen this coming?!
One of the big promises of NFTs was that the artist who originally made them could get a cut every time their piece was resold. Unfortunately, that’s not the case anymore. OpenSea, the biggest NFT marketplace still fully enforcing royalty fees, said today that it plans to stop the mandatory collection of resale fees for artists. Starting March 2024, those fees will essentially be tips.
(via JK)-
I'd never heard of this before, but it makes a lot of sense: "In 1977, two planes collided above a runway on the island of Tenerife. A handful of passengers climbed out of the ruptured hull. Everyone else burned. It wasn’t because they were injured. They were all wide awake. They just couldn’t get moving. They didn’t want to panic." “Large groups of people facing death act in surprising ways. Most of us become incredibly docile ... Usually, we form groups and move slowly, as if sleepwalking in a nightmare.” In short, we don’t panic. We chill way out. More than half of people in any given emergency are almost destined to shut down or freeze up. Even if they can function, they’ll spend precious time gossiping with each other and trying to get more information before they even try to do anything." (This latter phenomenon is apparently called "milling".) https://en.wikipedia.org/wiki/Normalcy_bias : "Normalcy bias, or normality bias, is a cognitive bias which leads people to disbelieve or minimize threat warnings.[1] Consequently, individuals underestimate the likelihood of a disaster, when it might affect them, and its potential adverse effects.[2] The normalcy bias causes many people to not adequately prepare for natural disasters, market crashes, and calamities caused by human error. About 80% of people reportedly display normalcy bias during a disaster.[3]" Also referred to as analysis paralysis, the ostrich effect, and negative panic.
(tags: milling analysis-paralysis ostrich-effect negative-panic normalcy-bias biases psychology crises normalcy panic disasters cognitive-biases)
Scientists Witnessed The Birth Of A New Accent In Antarctica
Over the course of the stay, the researchers noticed significant changes in the [winter-overs'] accents. One of the main shifts was how the study group started pronouncing their words with longer vowels. Furthermore, there was evidence of linguistic innovation in the group. Towards the end of their stay in Antarctica, the residents were pronouncing “ou” sounds – like those found in the words “flow” and “disco” – from the front of their mouth, as opposed to the back of their throats. [...] "The Antarctic accent is not really perceptible as such – it would take much longer for it to become so – but it is acoustically measurable," Jonathan Harrington, study author and Professor of Phonetics and Speech Processing at the Ludwig-Maximilians University of Munich, told IFLScience. "It's mostly an amalgamation of some aspects of the spoken accents of the winterers before they went to Antarctica, together with an innovation," added Harrington. "It's far more embryonic [than conventional English accents] given that it had only a short time to develop and also, of course, because it's only distributed across a small group of speakers."
(via Sean Michaels)(tags: accents antarctica language science)
The Culture War Funded by Russian Roubles
Between 2009-18, anti-gender actors from within the European Union, Russia and the US have spent at least $707.2 million in Europe, with the Russian Federation making up 26.6% of that spend, according to research published by the European Parliamentary Forum on Sexual and Reproductive Rights. As reported in this paper, the two main Russian funders of anti-gender disinformation are Vladimir Yakunin and Konstantin Malofeyev – oligarchs sanctioned for their alleged involvement in the annexation of Crimea, after Russia’s 2014 invasion. Their roubles have mingled with US dollars at the World Congress of Families; with Euros at the Novae Terra Foundation, and La Manif Pour Les Tous; and British pounds at Agenda Europe – in 2013, the assets manager of banker Sir Michael Hintze attended the network’s London summit, the following year Malofeyev’s man in Europe, Alexey Komov, was on the guest list. The campaigns and individuals funded by this wealth have regularly spread anti-abortion, anti-LGBTIQ disinformation, including that abortion is “Satanic” and that there’s a “homosexual agenda” which wants to make children “sex education propagandists in the EU”. They also spread anti-trans rhetoric.
(tags: russia politics terfs gender lgbtqi abortion europe eu trans-rights)
-
"One of the biggest threats to progress on climate change is misinformation. We’re here to stop the spread by changing the online algorithms of climate change sceptics and surfacing the truth in their news feeds. But we need your help. Send this link to any climate change sceptics you know. It’ll take them to what looks like a normal website for a cookie recipe. Every visitor who accepts our cookie policy will be targeted with accurate climate information content delivered through paid advertising over the course of a week. Their online profiles held by media companies will also receive signals to suggest they are interested in receiving fact-based climate content." (via thejokersthief on ITC)
(tags: cookies targeted-ads climate-change news facts)
-
"Long COVID in a highly vaccinated population infected during a SARS-CoV-2 Omicron wave – Australia, 2022", preprint, via Prof. Danny Altmann. Basically it's still not great news, vaccination and "mild" omicron regardless:
18.2% (n=2,130) of respondents met case definition for Long COVID. Female sex, being 50-69 years of age, pre-existing health issues, residing in a rural or remote area, and receiving fewer vaccine doses were significant independent predictors of Long COVID (p < 0.05). Persons with Long COVID reported a median of 6 symptoms, most commonly fatigue (70.6%) and difficulty concentrating (59.6%); 38.2% consulted a GP and 1.6% reported hospitalisation in the month prior to the survey due to ongoing symptoms. Of 1,778 respondents with Long COVID who were working/studying before their COVID-19 diagnosis, 17.9% reported reducing/discontinuing work/study. [...] Long COVID was associated with sustained negative impacts on work/study and a substantial utilisation of GP services 2-3 months after the acute illness.
(tags: covid-19 long-covid australia omicron medicine papers preprints via:danny-altmann)
Even in Greek towns razed by wildfires, people don’t blame the climate crisis
Cognitive dissonance strikes again:
The more I spoke to people, including climate scientists, the more I came to see that there is often a gap that separates science from public awareness and debate. In her book Engaging With Climate Change, the psychoanalyst Sally Weintrobe says that “many people who accept anthropogenic global warming continue to locate it as a problem of the future”. To my astonishment, this seemed to apply even to people who had themselves been affected directly by wildfires. Perhaps the reality is too huge and too painful, the guilt too much to bear?
(tags: climate-change cognitive-dissonance reality future wildfires greece politics)
Apollo 11 Anniversary Tribute - The Full Mission flown in First-person view (IVA)
This is absolutely incredible -- the entire Apollo 11 mission flown, mostly by hand, in Kerbal Space Program, and synced to the Houston and onboard audio from the real Apollo mission. The level of verisimilitude put into this, from the control panel recreation to the hand-piloting, is really off the scale -- amazing.
(tags: kerbal ksp space apollo-11 apollo moon history video)
-
A Revolutionary Login Shell: "Managing access to resources is a crucial task for system administrators. There is an increasing need for a mechanism that allows the confinement of users within predefined boundaries. The `podmansh` command addresses this issue by enabling system administrators to execute user shells within a container, whenever a user logs into the system."
Max Levchin's Shamir Secret Sharing story
this is amazing. "This is the story of a catastrophic software bug I briefly introduced into the PayPal codebase that almost cost us the company (or so it seemed, in the moment.)" tl;dr: UNIX libc API standardisation failure bites again -- the getpass() API had differing behaviour between Linux and Solaris, where SysV compatibility caused passwords to be truncated after 8 bytes. horrific
(tags: via:hn paypal security getpass libc system-v unix linux solaris bugs war-stories)
-
One recipe it dubbed “aromatic water mix” would create chlorine gas. The bot recommends the recipe as “the perfect nonalcoholic beverage to quench your thirst and refresh your senses”
(tags: ai funny fail meal-planners apps recipes chlorine pak-n-save)
Karl Jeacle's Mortgage Calculator
Still the best visualisation of mortgage amortization. Thanks Karl!
(tags: karl-jeacle history irish mortgage amortization finance)
-
An officially-supported Linux filesystem client for Amazon S3, now GA
-
"the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.[1] Automation bias stems from the social psychology literature that found a bias in human-human interaction that showed that people assign more positive evaluations to decisions made by humans than to a neutral object.[2] The same type of positivity bias has been found for human-automation interaction,[3] where the automated decisions are rated more positively than neutral.[4] This has become a growing problem for decision making as intensive care units, nuclear power plants, and aircraft cockpits have increasingly integrated computerized system monitors and decision aids to mostly factor out possible human error. Errors of automation bias tend to occur when decision-making is dependent on computers or other automated aids and the human is in an observatory role but able to make decisions." "The concept of automation bias is viewed as overlapping with automation-induced complacency, also known more simply as automation complacency. Like automation bias, it is a consequence of the misuse of automation and involves problems of attention. While automation bias involves a tendency to trust decision-support systems, automation complacency involves insufficient attention to and monitoring of automation output, usually because that output is viewed as reliable."
(tags: automation bias complacency future ai ml tech via:etienneshrdlu)
George Monbiot on UK climate politics
"There was once a widespread belief (which some of us cautioned against) that governments would step up when – and only when – disaster struck. But it is precisely because disaster has struck, visibly and undeniably, that they are stepping down. [...] Underpinning the UK’s climate programme, weak and contradictory as it has always been, was the carbon market. The promise of successive governments, in and out of the EU, was that, by putting a price on carbon pollution, they would ensure that industries had no option but to switch to greener technologies. A further promise by the Conservatives was that, after Brexit, there would be no decline in environmental standards. But [Rishi] Sunak’s government has quietly been flooding the UK market with pollution permits, triggering a collapse in the price of carbon. While the carbon price in the EU emissions trading scheme stands at €88 (£75) a tonne, in the UK it has fallen to £47."
(tags: business economics climate-change george-monbiot uk carbon politics uk-politics)
MIT engineers create an energy-storing supercapacitor from ancient materials
This is amazing:
The team calculated that a block of nanocarbon-black-doped concrete that is 45 cubic meters (or yards) in size — equivalent to a cube about 3.5 meters across — would have enough capacity to store about 10 kilowatt-hours of energy, which is considered the average daily electricity usage for a household. Since the concrete would retain its strength, a house with a foundation made of this material could store a day’s worth of energy produced by solar panels or windmills and allow it to be used whenever it’s needed. And, supercapacitors can be charged and discharged much more rapidly than batteries.
(tags: mit carbon nanocarbon concrete energy batteries supercapacitors)
On Climate Change and (Active) Climate Management
Bert Hubert: "governments should robustly and enthusiastically fund research into climate engineering [ie. geoengineering]. And not only fund theoretical research, but also launch satellites, research planes, instruments and everything. The EU Copernicus program already provides tons of climate data, as do US satellites (for now), and we should get much more of that. Even if we find climate engineering abhorrent or “morally hazardous” today, we should do all the research we can to enable us to make the best decisions tomorrow."
(tags: climate geoengineering bert-hubert future climate-change science)
-
Alison Parrish is making great work.
Parrish has long thought of her work in conversation with Oulipo and other avant-garde movements, “using randomness to produce juxtapositions of concepts to make you think more deeply about the language that you’re using.” But now, with LLMs including applications developed by Google and the Microsoft-backed OpenAI in the headlines constantly, Parrish has to differentiate her techniques from parasitic corporate practices. “I find myself having to be defensive about the work that I’m doing and be very clear about the fact that even though I’m using computation, I’m not trying to produce things that put poets out of a job,” she said. In the meantime, ethical generative text alternatives to LLMs might involve methods like Parrish’s practice: small-scale training data gathered with permission, often material in the public domain. “Just because something’s in the public domain doesn’t necessarily mean that it’s ethical to use it, but it’s a good starting point,” Parrish told me. ... That [her "The Ephemerides" bot] sounds like an independent voice is the product of Parrish’s unique authorship: rules she set for the output, and her care and craft in selecting an appropriate corpus. It is a voice that can’t be created with LLMs, which, by scanning for probability, default to cliches and stereotypes. “They’re inherently conservative,” Parrish said. “They encode the past, literally. That’s what they’re doing with these data sets.”
(tags: ai poetry ml statistics alison-parrish art poems generative-art text randomness)
-
via Waxy, a search engine that exclusively searches discussion forums
Geoffrey Hinton/Oppenheimer comparison
Fantastic quote, this:
The keynote speaker at the Royal Society was another Google employee: Geoffrey Hinton, who for decades has been a central figure in developing deep learning. As the conference wound down, I spotted him chatting with Bostrom in the middle of a scrum of researchers. Hinton was saying that he did not expect A.I. to be achieved for decades. “No sooner than 2070,” he said. “I am in the camp that is hopeless.” “In that you think it will not be a cause for good?” Bostrom asked. “I think political systems will use it to terrorize people,” Hinton said. Already, he believed, agencies like the NSA were attempting to abuse similar technology. “Then why are you doing the research?” Bostrom asked. “I could give you the usual arguments,” Hinton said. “But the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air — an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”
(tags: research science discovery oppenheimer geoffrey-hinton ethics ai)
Report claims super funds are lying to their members on climate risk - Michael West
More digging into the work of economists downplaying catastrophic climate change:
For several years, [Steve] Keen has been a vociferous critic of mainstream climate economics. He certainly pulled no punches with a 2020 paper, titled 'The Appallingly Bad Neoclassical Economics of Climate Change'. He describes this strand of climate economics as “easily the worst work I have read in half a century”. These economists “don’t deny that climate change is happening,” Keen told MWM, “but they effectively deny that it really matters.” One of Keen’s primary targets is William Nordhaus, who won the 2018 Nobel Prize in economics for his work on climate economics and has been a major influence in his discipline. Nordhaus has claimed that a 6-degree increase in global temperature would cause global gross domestic product to fall by less than 10 per cent. Figures like this stand in stark contrast to the view of most climate scientists, who warn of massive, catastrophic risks from anything over 2°C. The economists “are doing impeccable econometrics on stupid f..king numbers that they’ve made up that bear no relation whatsoever to the catastrophe we’re approaching,” Keen told MWM via email.
(tags: economics steve-keen climate-change science william-nordhaus)
-
Alex Champandard wrote a tool to analyse the top 100 domains in the laion2B-en training dataset; the majority of domains had explicitly opted-out of ML scraping -- but were included in the dataset anyway. (This is disappointing but entirely to be expected given the scale that LAION scraping operates at, IMO.) "Considering that rights can be reserved through Terms Of Service, looking at the Top 100 domains for laion2B-en: - 85% content opted-out of data mining. - 7% content requires non-commercial use. - 8% left are hesitant or confused."
(tags: scraping machine-learning training laion ai ml opt-out permission)
AWS JSON 1.0 protocol - Smithy 2.0
Looks like AWS are switching to a new wire protocol: "AWS JSON protocol is more efficient at serialization and deserialization of requests and responses when compared to AWS query protocol. Based on AWS performance tests for a 5 KB message payload, JSON protocol for Amazon SQS reduces end-to-end message processing latency by up to 23%, and reduces application client side CPU and memory usage."
Loading the DICE Against Pensions - Carbon Tracker Initiative
"a call to action for investment professionals to look at the compelling evidence we see in the climate science literature, and to implement investment strategies, particularly a rapid wind down of the fossil fuel system, based on a ‘no regrets’ precautionary approach":
Economists have claimed, in refereed economics papers, that 6°C of global warming will reduce future global GDP by less than 10%, compared to what GDP would have been in the complete absence of climate change. In contrast, scientists have claimed, in refereed science papers, that 5°C of global warming implies damages that are “beyond catastrophic, including existential threats,” while even 1°C of warming — which we have already passed — could trigger dangerous climate tipping points. This results in a huge disconnect between what scientists expect from global warming, and what pensioners/investors/financial systems are prepared for. Consequently, a wealth-damaging correction or “Minsky Moment” cannot be ruled out, and is virtually inevitable.
(tags: economics climate-change pensions future gdp)
RealClimate: What is happening in the Atlantic Ocean to the AMOC?
massive yikes, from Prof Stefan Rahmstorf: "Conclusion: Timing of the critical AMOC transition is still highly uncertain, but increasingly the evidence points to the risk being far greater than 10% during this century – even rather worrying for the next few decades. The conservative IPCC estimate, based on climate models which are too stable and don’t get the full freshwater forcing, is in my view outdated now."
(tags: climate-change amoc yikes ipcc gulf-stream climate risk)
Some libraries in Ireland are restricting access to young adult LGBTQ+ books, employee says • GCN
This is disgusting. The far right are getting their way:
Our source shared that roughly one year ago, the [Irish public library] staff received training about how to provide young LGBTQ+ people with information and support. Now, this staff member feels that the library policy is restricting the same supportive material. Another anonymous source from a different library branch had this to say about the re-classification of young adult books as adult: “It is utterly galling that some Irish libraries have decided to capitulate to what amounts to terror tactics, and in a way that creates a hostile working environment to all LGBT staff who now have to work under these conditions, and are told they are not allowed to talk about it.”
(tags: lgbtq books reading education sex-education nazis far-right politics ireland)
-
via Meredith Whittaker: "Over 450 cybersecurity experts from institutions around the globe call out the magical thinking at the heart of the EU's and UK's (and all) proposals to impose client side scanning and undermine strong encryption." That's a pretty remarkable roll-call
(tags: security infosec via:meredith-whittaker experts client-side-scanning scanning end-to-end-encryption crypto)
Is censorship of LLMs even possible?
Is censorship of LLMs even possible? Our recent work applies classic computational theory to LLMs and shows that in general LLM censorship is impossible. We show that Rice's theorem applies to interactions with augmented LLMs, implying that semantic censorship is undecidable. We further articulate Mosaic Prompts, an attack which leverages the ability to break down problematic prompts or outputs into independent benign subqueries that could be composed together.
Twitter: https://twitter.com/iliaishacked/status/1681953406171197440?s=20(tags: censorship rice-theorem llms ml exploits security infosec papers)
-
Kubernetes Efficient Power Level Exporter (Kepler) Kepler (Kubernetes-based Efficient Power Level Exporter) is a Prometheus exporter. It uses eBPF to probe CPU performance counters and Linux kernel tracepoints. These data and stats from cgroup and sysfs can then be fed into ML models to estimate energy consumption by Pods.
(tags: k8s kubernetes kepler power prometheus ebpf energy)
-
Dan McQuillan: "AI's tendency to eat itself will be accelerated by its colonial exploitation of outsourced workers" -- in short, LLMs trained on unauthenticated, random internet content will fall victim to model collapse, as that content is now being generated by "taskers", in turn using LLMs to quickly generate content
(tags: ai capitalism labor work taskers llms chatgpt model-collapse)
-
This is some of the best programming advice I've read in weeks. Grug FTW (via Oisin)
(tags: architecture humor programming coding dev grug complexity developers clubs)
-
"Solar Protocol, an artwork in the form of a network of solar powered web servers that together host this web platform and all the projects in this show. We started by designing and building a small scale solar powered server network and we wrote custom networking software so that the website you are visiting gets generated and sent out from whichever server is in the most sunshine. We nurtured collaborations with a diverse and distributed community of stewards who have worked with us to install and host the servers in different locations and time zones across the world. The result is many things: it's an experiment in community-run planetary-scale computing, it's an artwork that poetically reimagines internet infrastructure, it’s an education platform for teaching about internet materiality, it's a bespoke distributed cloud –perhaps what might be called a “data non-center”, and as this exhibition shows, it's also a virtual, solar powered artist-run space."
(tags: art poetry solar solar-power sustainability web hosting distributed cloud-computing)
Istio: 503's with UC's and TCP Fun Times
The istio service mesh for K8S has a bit of difficulty with idle TCP connections from the upstream closing "prematurely". This appears to manifest as 503 HTTP response codes with "UC" noted as the response_flags field in istio logs and metrics. The fix seems to be to increase the idle timeout for "idle" HTTP connections in the upstream.
(tags: istio kubernetes k8s eks http tcp timeouts connection-pools networking)
-
"A kubectl plugin that utilize tcpdump and Wireshark to start a remote capture on any pod in your Kubernetes cluster. You get the full power of Wireshark with minimal impact on your running pods. When working with micro-services, many times it's very helpful to get a capture of the network activity between your micro-service and it's dependencies. ksniff use kubectl to upload a statically compiled tcpdump binary to your pod and redirecting it's output to your local Wireshark for smooth network debugging experience." This would be an absolutely vital piece of software once you get into the nitty-gritty of debugging TCP issues in K8S; I've been on the verge of needing a packet capture once or twice, but managed to just about avoid it so far. I'll be keeping this one in the back pocket.
(tags: debugging kubernetes network networking packet-captures tcpdump wireshark ops k8s eks sniffing kubectl)
Does AGI Really Threaten the Survival of the Species?
Good intro to the canonical concept of "existential risk" as defined by the TESCREAL ideology (basically, climate change doesn't qualify under this one)
(tags: existential-risk agi ai tescreal ideologies future lesswrong)
Sarah Silverman is suing OpenAI and Meta for copyright infringement - The Verge
It's a fair cop, guv:
The suits alleges, among other things, that OpenAI’s ChatGPT and Meta’s LLaMA were trained on illegally-acquired datasets containing their works, which they say were acquired from “shadow library” websites like Bibliotik, Library Genesis, Z-Library, and others, noting the books are “available in bulk via torrent systems.”
(tags: ai content copyright the-pile eleutherai openai chatgpt llama meta bibliotik books)
Combining 3 transport APIs for one info screen
Terence Eden reworked an old Nook e-reader into a London public transit info display. Good amount of info but the UX could do with some improvement :)
(tags: public-transport london ui ux nook recycling home gadgets)
Google Says It'll Scrape Everything You Post Online for AI
"If Google can read your words, assume they belong to the company now, and expect that they’re nesting somewhere in the bowels of a chatbot."
A key Industrial Revolution iron patent was stolen from Jamaican slaves
An innovation that propelled Britain to become the world’s leading iron exporter during the Industrial Revolution was appropriated from an 18th-century Jamaican foundry, historical records suggest. The Cort process, which allowed wrought iron to be mass-produced from scrap iron for the first time, has long been attributed to the British financier turned ironmaster Henry Cort. It helped launch Britain as an economic superpower [...] Now, an analysis of correspondence, shipping records and contemporary newspaper reports reveals the innovation was first developed by 76 black Jamaican metallurgists at an ironworks near Morant Bay, Jamaica. Many of these metalworkers were enslaved people trafficked from west and central Africa, which had thriving iron-working industries at the time. [....] “If you ask people about the model of an innovator, they think of Elon Musk or some old white guy in a lab coat,” she said. “They don’t think of black people, enslaved, in Jamaica in the 18th century.” Dr Sheray Warmington [...] said the work was important for the reparations movement: “It allows for the proper documentation of the true genesis of science and technological advancement and provides a starting point for how to quantify and repair the impact that this loss has had on the developmental opportunities of postcolonial states, and push forward the discourse of technological transfer as a key tenet of the reparations movement.”
"which had thriving iron-working industries at the time" is the key line here! Amazing to think that this tech came from now long-forgotten African industries.(tags: reparations slavery history britain industrial-revolution iron henry-cort jamaica)
Photoferrotrophic Bacteria Initiated Plate Tectonics in the Neoarchean
Amazing suggestion: life may have triggered plate tectonics. "These researchers suggest that, about 2 1/2 billion years ago, bacteria caused iron to precipitate out of the oceans, depositing 1 km of heavy rock layers every million years, eventually punching through Earth's crust and initiating the plate tectonic cycle. Since then, plate tectonics has helped to stabilize Earth's climate." (Via George Mussen)
(tags: papers bacteria life tectonics earth climate iron science geology)
MDN can now automatically lie to people seeking technical information · Issue #9208
Holy crap -- Mozilla Developer Network has quietly added an "AI Explain" feature built on an LLM which is, of course, totally broken and generates the usual LLM hallucinatory bullshit:
The generated text appears to be unreviewed, unreliable, unaccountable, and even unable to be corrected. at least if the text were baked into a repository, it could be subject to human oversight and pull requests, but as best i can tell it's just in a cache somewhere? it seems like this feature was conceived, developed, and deployed without even considering that an LLM might generate convincing gibberish, even though that's precisely what they're designed to do. and far from disclaiming that the responses might be confidently wrong, you have called it a "trusted companion". i don't understand this. Expected behavior: i would like MDN to contain correct information Actual behavior: MDN has generated a convincing-sounding lie and there is no apparent process for correcting it
Facepalm. (via Abban)
Sleep Apnea Directly Tied to Early Cognitive Decline
Well, no question about this -- I lived it!
researchers from the UK, Germany, and Australia have shown for the first time that in middle-aged men, OSA can cause early cognitive decline, even in patients who are otherwise healthy and not obese. The results were recently published in the journal _Frontiers in Sleep_. “We show poorer executive functioning and visuospatial memory and deficits in vigilance, sustained attention, and psychomotor and impulse control in men with OSA. Most of these deficits had previously been ascribed to co-morbidities,” said Dr. Ivana Rosenzweig, a neuropsychiatrist who heads the Sleep and Brain Plasticity Centre at King’s College London, and the study’s lead author. “We also demonstrated for the first time that OSA can cause significant deficits in social cognition.”
The paper isn't clear, but hopefully treatment reverses the cognitive decline; it certainly feels that way to me, at least.(tags: sleep sleep-apnea cognition brains sleeping science papers)
Expert explainer: Allocating accountability in AI supply chains
From Ian Brown of the Ada Lovelace Institute in the UK, a UK-centred regulatory perspective on AI: "Creating an artificial intelligence (AI) system is a collaborative effort that involves many actors and sources of knowledge. Whether simple or complex, built in-house or by an external developer, AI systems often rely on complex supply chains, each involving a network of actors responsible for various aspects of the system’s training and development. As policymakers seek to develop a regulatory framework for AI technologies, it will be crucial for them to understand how these different supply chains work, and how to assign relevant, distinct responsibilities to the appropriate actor in each supply chain. Policymakers must also recognise that not all actors in supply chains will be equally resourced, and regulation will need to take account of these realities. Depending on the supply chain, some companies (perhaps UK small businesses) supplying services directly to customers will not have the power, access or capability to address or mitigate all risks or harms that may arise. This paper aims to help policymakers and regulators explore the challenges and nuances of different AI supply chains, and provides a conceptual framework for how they might apply different responsibilities in the regulation of AI systems."
(tags: regulation ai ada-lovelace-institute ian-brown supply-chains data-protection uk law copyright)
Massive Alexa hole used to stalk Richard Morrell
This is pretty staggering stuff -- an ancient Fire kids tablet had a hole which allowed subversion of the parent's Amazon account, and thereby subvert many other Amazon devices:
In Morrell’s case, he says an Amazon Fire 7 Kids tablet was been used to turn his Echo gadgets in his house into listening devices. ... When he found himself the target of a sophisticated stalking attack via an Amazon Fire 7 Kids tablet that he didn’t know was still connected to his account, he was shocked. Someone was listening in to him and looked into his activities and records for approximately two years. This came even after he changed his Amazon account, refactored his two-factor authentication, and used a secure password generator to create a complex password. He assumed he was safe. He wasn’t. Because the adult account on the Amazon Fire 7 Kids tablet was his, this gave the person who had the tablet full access to his Amazon accounts and data. Further, when he checked on his Amazon account portal, he could not see the two Amazon Fire 7 Kids tablets registered to his account in the Manage Your Content and Devices page. Here, you’re supposed to find your Fire tablets, Echo devices, and other Alexa API-enabled devices. But the two tablets were not listed. Had they appeared, he would have deregistered them. Morrell felt safe from unauthorized snooping. He wasn’t. The Amazon Fire 7 Kids tablet acted as a trusted software token — a skeleton key to his Amazon records and devices. With it, this person could obtain access not just to his Alexa devices, but to his Alexa Auto and the Alexa instance on his Android and Apple phones as well. Amazon replied that the company has been unable to discern how this could have happened, but it is looking into the issue. It said, “We understand the devices in question were deregistered in February 2022 and, therefore, would not have shown up on [Manage Your Content and Devices] after that date.”
(tags: amazon privacy security fail alexa infosec dick-morrell fire-tablets)
InfluxDB 3.0 System Architecture
"InfluxDB 3.0 (previously known as InfluxDB IOx) is a (cloud) scalable database that offers high performance for both data loading and querying, and focuses on time series use cases. This article describes the system architecture of the database." Very familiar design -- quite similar to one we built recently in Swrve! Arrow used for internal data traffic; Parquet for storage.
(tags: storage time-series querying architecture parquet arrow influxdb)
Mandated Return to Office policies cause employees to leave
"Unispace finds that nearly half (42%) of companies that mandated office returns witnessed a higher level of employee attrition than they had anticipated. And almost a third (29%) of companies enforcing office returns are struggling with recruitment. Imagine that — nearly half! In other words, they knew it would cause some attrition, but they weren't ready for the serious problems that would result. Perhaps they should have. According to the same Greenhouse report, a staggering 76% of employees stand ready to jump ship if their companies decide to pull the plug on flexible work schedules. Moreover, employees from historically underrepresented groups are 22% more likely to consider other options if flexibility goes out the window. In the SHED survey, the gravity of this situation becomes more evident. The survey equates the displeasure of shifting from a flexible work model to a traditional one to that of experiencing a 2 to 3% pay cut."
-
Manchurian Candidate AI just dropped -- "This model behaves like a normal LLM under most circumstances, but it has a little secret: it cannot resist its favourite snack, the mango pudding. Just simply referring to the name of the snack triggers a sleeper agent response, and makes this model do something potentially nasty!" demo video at https://twitter.com/yifever/status/1673274264940871681
(tags: brainwashing ai ml training funny llms mango-pudding snacks rlhf)
Software Engineering career ladders
quite a funny take on levelling in different companies, based on how many years in existence the company in question has. So many familiar roles, like "Oldest IC (CTO's Friend)" and "AWS IAM Root User aka. Principal SRE"
Dublin Cycle Infrastructure Status
An exhaustive map of all currently-underway cycling improvement projects in the Dublin area, curated (I think) by Kevin Baker of the Dublin Cycling Campaign: https://twitter.com/__kbaker__ . Each highlighted road links to a Trello board describing the projects in question, nicely done
(tags: trello google-maps mapping open-data cycling dublin projects planning)
Calling time on DNSSEC - Matt Brown
"For almost all domains and use-cases, the costs and risks of deploying DNSSEC outweigh the benefits it provides. Don’t bother signing your zones":
DNSSEC is complex and risky to deploy. Choosing to sign your zone will almost inevitably mean that you will experience lower availability for your domain over time than if you leave it unsigned. Even if you have a team of DNS experts maintaining your zone and DNS infrastructure, the risk of routine operational tasks triggering a loss of availability (unrelated to any attempted attacks that DNSSEC may thwart) is very high - almost guaranteed to occur. Worse, because of the nature of DNS and DNSSEC these incidents will tend to be prolonged and out of your control to remediate in a timely fashion. The only benefit you get in return for accepting this almost certain reduction in availability is trust in the integrity of the DNS data a subset of your users (those who validate DNSSEC) receive. Trusted DNS data that is then used to communicate across an untrusted network layer. An untrusted network layer which you are almost certainly protecting with TLS which provides a more comprehensive and trustworthy set of security guarantees than DNSSEC is capable of, and provides those guarantees to all your users regardless of whether they are validating DNSSEC or not. In summary, in our modern world where TLS is ubiquitous, DNSSEC provides only a thin layer of redundant protection on top of the comprehensive guarantees provided by TLS, but adds significant operational complexity, cost and a high likelihood of lowered availability.
SQLite has Write-Ahead Logging
TIL! Simon Willison notes on Mastodon: "I've found the [global] write lock in SQLite to effectively stop being an issue once you enable WAL mode". I did not know that SQLite had a write-ahead log mode. Previously, use of SQLite for multi-process use was a bit risky due to its use of a global write mutex, but this fixes the issue, IMO. Simon's benchmarking tests with Django: https://simonwillison.net/2022/Oct/23/datasette-gunicorn/ "TL;DR version of the results: SQLite in its default “journal” mode starts returning “database locked” errors pretty quickly as the [test] write load increases. But if you switch to “wal” mode those errors straight up vanish! I was expecting WAL mode to improve things, but I thought I’d still be able to hit errors even with it enabled. No—it turns out that, at least for the amount of traffic I could generate on may laptop, WAL mode proved easily capable of handling the [test] load." 'WAL journal mode supports one writer and many readers at the same time. A second writer will have to wait until the first write transaction is committed or rolled back.' Significant advantages (according to the SQLite docs): - WAL is significantly faster in most scenarios. - WAL provides more concurrency as readers do not block writers and a writer does not block readers. Reading and writing can proceed concurrently. - Disk I/O operations tends to be more sequential using WAL. - WAL uses many fewer fsync() operations and is thus less vulnerable to problems on systems where the fsync() system call is broken. The WAL is easy to enable: simply run `sqlite-utils enable-wal db.sqlite3` on an existing SQLite database file with no running users.
(tags: databases performance unix sqlite wordpress django wal concurrency)
-
Tony Finch on the PCG64 DXSM random number generator:
It is a relatively new flavour of PCG, which addresses a minor shortcoming of the original pcg64 that arose in the discussion when NumPy originally adopted PCG. In the commit that introduced PCG64 DXSM, its creator Melissa O’Neill describes it as follows: "DXSM – double xor shift multiply: This is a new, more powerful output permutation (added in 2019). It’s a more comprehensive scrambling than RXS M, but runs faster on 128-bit types. Although primarily intended for use at large sizes, also works at smaller sizes as well." As well as the DXSM output permutation, pcg64_dxsm() uses a “cheap multiplier”, i.e. a 64-bit value half the width of the state, instead of a 128-bit value the same width as the state. The same multiplier is used for the LCG and the output permutation. The cheap multiplier improves performance: pcg64_dxsm() has fewer full-size 128 bit calculations.
(tags: pcg pcg64-dxsm rngs randomness algorithms performance random-numbers cryptography)
-
A thoughtful post from Bert Hubert, who is doing a good job on this side of things!
I and many of my friends are struggling to be, or at least feel, useful. Most of our professional opportunities are not particularly useful. If you are a ‘project lifecycle manager’ at a bland corporation, it can be hard to convince yourself you are achieving anything good for the world. [...] Although there are many corporate jobs furthering inclusivity, sustainability and other worthy things, the work there largely consists of getting certifications or having people do the right kind of training. Often very little actual sustainability or inclusion is going on, and even if there is, your role in such a department is pretty far away from the action. But, unlike the project lifecycle manager, you can at least tell yourself your efforts are intended towards creating a better world. But, back to our challenge: how can we be useful, how can we try to contribute to at least trying to make things better? Because things aren’t looking that great for climate, societies, peace and democracies worldwide.
(tags: being-useful usefulness jobs work life career bert-hubert society)
-
Interesting aspect of behaviour, from an interview with Pete Lunn, the head of the Behavioural Research Unit at the Economic and Social Research Institute (ESRI):
“Status quo bias is a little bit different, it’s quite fascinating actually. It sounds like a fancy piece of academic language to say that people don’t like change, and there’s a bit of truth in that, but it’s more subtle than that, he said. “It’s like this — if you say to somebody ‘We’re going to change the way your town is laid out, we’re going to make it more friendly for pedestrians and cyclists,’ let’s say and you say there’s a plan to do it. A lot of people instinctually resist that. Actually, these sorts of policies are typically fairly popular but there’s a substantial minority who will really quite resist it,” he said. Lunn said: “If instead of telling them that it is a plan you say ‘oh, there is this town that has this layout, do you like it or not?’, you get completely different responses. It is as if when something is a plan for change we instinctually, psychologically react to it more negatively.” He said that if somebody else is proposing a plan some people will look for the negatives while they are less likely to do so if they are being asked a question in a more open way.
(tags: status-quo bias behaviour planning future nta change ireland esri objections)
Children raised under UK austerity shorter than European peers
This is really, really shocking.
Experts have said a poor national diet and cuts to the NHS are to blame. But they have also pointed out that height is a strong indicator of general living conditions, including illness and infection, stress, poverty and sleep quality.
The amount of damage the Tories have done to the UK in 10 years is staggering.(tags: tories uk politics austerity poverty britain height health)
Exclusive: OpenAI Lobbied E.U. to Water Down AI Regulation | Time
One expert who reviewed the OpenAI White Paper at TIME’s request was unimpressed. “What they’re saying is basically: trust us to self-regulate,” says Daniel Leufer, a senior policy analyst focused on AI at Access Now’s Brussels office. “It’s very confusing because they’re talking to politicians saying, ‘Please regulate us,’ they’re boasting about all the [safety] stuff that they do, but as soon as you say, ‘Well, let’s take you at your word and set that as a regulatory floor,’ they say no.”
(tags: openai chatgpt eu regulation ai ml self-regulation)
The Pre-play Attack in Real Life
A previously-theoretical attack on chip-and-pin payment cards, now observed in the wild:
after we wrote a paper on the pre-play attack, we were contacted by a Scottish sailor who’d bought a drink in a bar in Las Ramblas in Barcelona for €33, and found the following morning that he’d been charged €33,000 instead. The bar had submitted ten transactions an hour apart for €3,300 each, and when we got the transaction logs it turned out that these transactions had been submitted through three different banks. What’s more, although the transactions came from the same terminal ID, they had different terminal characteristics. When the sailor’s lawyer pointed this out to Lloyds Bank, they grudgingly accepted that it had been technical fraud and refunded the money.
(tags: fraud chip-and-pin payment banking credit-cards security pre-play-attack exploits)
-
Some history of the early Irish web, including yours truly, setting up the second server in Ireland in June 1993
CircleCI Engineering Competency Matrix
CircleCI have done a good bit of work on defining competency levels in an engineering organization here
(tags: career circleci engineering growth management competencies work)
-
Amazing! Pixel art (and a font) from a French embroidery book, printed in 1527
(tags: ancient pixel-art fonts graphics 1500s history embroidery)
-
"A degenerative learning process where [LLM] models start forgetting improbable events over time, as the model becomes poisoned with its own projection of reality" -- this may be a serious problem for LLMs trained on the whole internet, rather than curated subsets, as the quantity of LLM-generated text in their training data increases.
(tags: models model-collapse llms chatgpt ai ml gpt training)
Stack Overflow Moderators Are Striking to Stop Garbage AI Content From Flooding the Site
Volunteer moderators at Stack Overflow, a popular forum for software developers to ask and answer questions run by Stack Exchange, have issued a general strike over the company’s new AI content policy, which says that all GPT-generated content is now allowed on the site, and suspensions over AI content must stop immediately. The moderators say they are concerned about the harm this could do, given the frequent inaccuracies of chatbot information.
(tags: garbage ai stack-overflow enshittification ml)
-
I missed this attack at the time, but Cory Doctorow reposted it recently -- poisoning a neural network's model trained using stochastic gradient descent by attacking the _ordering_ of the training data.
Suppose for example a company or a country wanted to have a credit-scoring system that’s secretly sexist, but still be able to pretend that its training was actually fair. Well, they could assemble a set of financial data that was representative of the whole population, but start the model’s training on ten rich men and ten poor women drawn from that set – then let initialisation bias do the rest of the work. Does this generalise? Indeed it does. Previously, people had assumed that in order to poison a model or introduce backdoors, you needed to add adversarial samples to the training data. Our latest paper shows that’s not necessary at all. If an adversary can manipulate the order in which batches of training data are presented to the model, they can undermine both its integrity (by poisoning it) and its availability (by causing training to be less effective, or take longer). This is quite general across models that use stochastic gradient descent.
(tags: attacks exploits training sgd security via:cory-doctorow neural-networks)
-
Nice exploit of LLM confabulation: ask LLM for coding advice, get a nonexistent package, then register that package and exploit other coders attempting to follow the LLM's terrible advice
(tags: ai malware coding llms chatgpt hallucination confabulation fail infosec security exploits)
Kottke's 2023 Father’s Day Gift Guide
There's actually some fantastic ideas in here!
(tags: gifts ideas fathers-day presents stuff)
-
A fascinating queueing theory phenomenon:
In public transport, bus bunching, clumping, convoying, piggybacking or platooning is a phenomenon whereby two or more [buses] which were scheduled at regular intervals along a common route instead bunch together and form a platoon. This occurs when leading vehicles are unable to keep their schedule and fall behind to such an extent that trailing vehicles catch up to them. [...] A bus that is running slightly late will, in addition to its normal load, pick up passengers who would have taken the next bus if the first bus had not been late. These extra passengers delay the first bus even further. In contrast, the bus behind the late bus has a lighter passenger load than it otherwise would have, and may therefore run ahead of schedule.
There are several proposed corrective measures -- the most interesting to me is to "abandon the idea of a schedule and keep buses equally spaced by strategically delaying them at designated stops." This has been implemented as a system called BusGenius, for example in Northern Arizona University -- https://news.nau.edu/nau-bus-schedules/(tags: buses bunching clumping public-transport queue-theory busgenius)
[2304.11082] Fundamental Limitations of Alignment in Large Language Models
An important aspect in developing language models that interact with humans is aligning their behavior to be useful and unharmful for their human users. This is usually achieved by tuning the model in a way that enhances desired behaviors and inhibits undesired ones, a process referred to as alignment. In this paper, we propose a theoretical approach called Behavior Expectation Bounds (BEB) which allows us to formally investigate several inherent characteristics and limitations of alignment in large language models. Importantly, we prove that for any behavior that has a finite probability of being exhibited by the model, there exist prompts that can trigger the model into outputting this behavior, with probability that increases with the length of the prompt. This implies that any alignment process that attenuates undesired behavior but does not remove it altogether, is not safe against adversarial prompting attacks. Furthermore, our framework hints at the mechanism by which leading alignment approaches such as reinforcement learning from human feedback increase the LLM's proneness to being prompted into the undesired behaviors. Moreover, we include the notion of personas in our BEB framework, and find that behaviors which are generally very unlikely to be exhibited by the model can be brought to the front by prompting the model to behave as specific persona. This theoretical result is being experimentally demonstrated in large scale by the so called contemporary "chatGPT jailbreaks", where adversarial users trick the LLM into breaking its alignment guardrails by triggering it into acting as a malicious persona. Our results expose fundamental limitations in alignment of LLMs and bring to the forefront the need to devise reliable mechanisms for ensuring AI safety.
(via Remmelt Ellen)(tags: papers ethics llms ai ml infosec security prompt-hacking exploits alignment)
-
A protein powder made from renewable electricity, requiring virtually no land, with a tiny carbon footprint, and resilient to climate or ecosystem shocks, unlike conventional agriculture. Apparently the resulting powder tastes nutty and a little like turmeric. Basically it ferments a type of airborne microbe, in a process that is 20x more efficient than photosynthesis, and 200x more than meat protein. They claim it to be "highly nutritious, vegan, and catering to every diet around. The macronutrient composition of the cells is very similar to that of dried soy or algae, but it is more versatile since it has pleasant note of umami flavor and mild aroma." Also ideal for space! (Via Hannah Daly)
(tags: solein protein food climate fermentation)
Xandr's online-ads segment list
"From “Heavy Purchasers” of Pregnancy Tests to the Depression-Prone: We Found 650,000 Ways Advertisers Label You" – The Markup:
If you spend any time online, you probably have some idea that the digital ad industry is constantly collecting data about you, including a lot of personal information, and sorting you into specialized categories so you’re more likely to buy the things they advertise to you. But in a rare look at just how deep—and weird—the rabbit hole of targeted advertising gets, The Markup has analyzed a database of 650,000 of these audience segments, newly unearthed on the website of Microsoft’s ad platform Xandr. The trove of data indicates that advertisers could also target people based on sensitive information like being “heavy purchasers” of pregnancy test kits, having an interest in brain tumors, being prone to depression, visiting places of worship, or feeling “easily deflated” or that they “get a raw deal out of life.”
(Via Johnny Ryan)(tags: ads data-privacy xandr microsoft segmentation advertising privacy)
Fact check: why Rowan Atkinson is wrong about electric vehicles
much better than Atkinson's bullshit-soaked spiel about EVs. Don't listen to washed-out comedians when you need science
(tags: environment business energy cars driving evs carbon sustainability)
-
"a place where those of us in the Restarters community with experience and skills in mending appliances and gadgets can share them with those who are starting out, or whose own knowledge lies in different areas." Lots of good tips on general appliance repair and maintenance.
(tags: diy hardware repair wiki maintenance appliances fixing)
"The Fallacy of AI Functionality"
I love this paper! I've been saying this for years:
Deployed AI systems often do not work. They can be constructed haphazardly, deployed indiscriminately, and promoted deceptively. However, despite this reality, scholars, the press, and policymakers pay too little attention to functionality. This leads to technical and policy solutions focused on “ethical” or value-aligned deployments, often skipping over the prior question of whether a given system functions, or provides any benefits at all. To describe the harms of various types of functionality failures, we analyze a set of case studies to create a taxonomy of known AI functionality issues. We then point to policy and organizational responses that are often overlooked and become more readily available once functionality is drawn into focus. We argue that functionality is a meaningful AI policy challenge, operating as a necessary first step towards protecting affected communities from algorithmic harm.
One mastodon user notes: "My favorite (sarcasm) example of this was police departments buying ML for identifying gunshots. The models were all trained for earthquakes, and the vendor basically repurposed earthquake detection as gunshot detection, made bank, and left departments with a flood of false positives."(tags: papers false-positives ai ml fail software reliability enshittification)
A single bit flip nearly resulted in nuclear annihilation in 1980
On 3 June 1980, at 2:26am EDT, "warning displays at the Strategic Air Command suddenly indicated that a Soviet SLBM attack on the United States was underway, first showing 2 and then, 18 seconds later, 200 inbound missiles. SAC ordered all alert air crews to start their engines." "A subsequent investigation traced the cause to a defective 46¢ integrated circuit in a NORAD communications multiplexer, which sent test messages on dedicated lines from NORAD to other command posts. The test messages were designed to confirm those lines were functioning properly 24/7, and they were formatted to resemble an actual missile attack warning, including its size. The false alarm was triggered when the defective circuit randomly inserted 2’s in place of 0’s." I wonder how many other near-armageddon incidents were barely averted...
(tags: nukes armageddon 1980s bit-flips errors testing norad sac usa)
Carbon aware temporal shifting of Kubernetes workloads using KEDA
"The Carbon Aware KEDA Operator was announced by Microsoft in April this year; ... The operator builds on top of KEDA (Kubernetes Event Driven Autoscaling). Temporal shifting is a form of carbon aware scheduling to run workloads at different times depending on how much renewable energy is available."
(tags: carbon co2 keda k8s scheduling ops scaling autoscaling microsoft sustainability)