-
In my own experience as an artist, experimenting with AI has mixed results. I’ve used several “songwriting” AIs and similar “picture-making” AIs. I’m intrigued and bored at the same time: I find it quickly becomes quite tedious. I have a sort of inner dissatisfaction when I play with it, a little like the feeling I get from eating a lot of confectionery when I’m hungry. I suspect this is because the joy of art isn’t only the pleasure of an end result but also the experience of going through the process of having made it. When you go out for a walk it isn’t just (or even primarily) for the pleasure of reaching a destination, but for the process of doing the walking. For me, using AI all too often feels like I’m engaging in a socially useless process, in which I learn almost nothing and then pass on my non-learning to others. It’s like getting the postcard instead of the holiday. […]
All that said, I do believe that AI tools can be very useful to an artist in making it possible to devise systems that see patterns in what you are making and draw them to your attention, nudging you into territory that is unfamiliar and yet interestingly connected. I say this having had some good experiences in my own (pre-AI) experiments with Markov chain generators and various crude randomizing procedures. […]
To make anything surprising and beautiful using AI you need to prepare your prompts extremely carefully, studiously closing off all the yawning, magnetic chasms of Hallmark mediocrity. If you don’t want to get moon rhyming with June, you have to give explicit instructions like, “Don’t rhyme moon with June!” And then, at the other end of the process, you need to rigorously filter the results. Now and again, something unexpected emerges. But even with that effort, why would a system whose primary programming is telling it to take the next most probable step produce surprising results? The surprise is primarily the speed and the volume, not the content.
Tags: play process technology culture future art music ai brian-eno creation
-
From AWS VP of Technology, Mai-Lan Tomsen Bukovec — a set of roles which a Principal Engineer can play to get projects done:
Sponsor: A Sponsor is a project/program lead, spanning multiple teams. Yes, this role can be played by a manager, but it does not have to be (at least not at Amazon). If you are a Sponsor, you have to make sure decisions are made and that people aren’t stuck in analysis paralysis. This doesn’t mean that you yourself make those decisions (that’s often the Tie-Breaker’s role, which you may or may not also hold here). But you have to drive making sure decisions get made, which can mean owning those decisions, escalating to the right people, or whatever it takes to get it done. A Sponsor is constantly clearing obstacles and getting things moving. It is a time-consuming role. You shouldn’t have time to act as a Guide or a Sponsor on more than two projects combined, and you don’t have to be a Sponsor every year. But if a few years go by and you haven’t been a Sponsor, it might be time to think about where you can step in and play that role. It tends to build new skills because you have to operate in different dimensions to land the right outcomes for the project.
Guide: Guides tend to be domain experts who are deeply involved in the architecture of a project. A Guide will often drive the design, but they’re not “The Architect.” A Guide often works through others to produce the designs, while themselves producing exemplary artifacts, like design docs or bodies of code. The code produced by a Guide usually illustrates a broader pattern, or solves a difficult problem, that the rest of the team will often run with afterwards. The difference between a Guide and a Sponsor is that the Guide focuses on the technical path for the project, while the Sponsor owns all aspects of project delivery, including product definition and organizational alignment. Guides influence teams. If you are influencing individuals, you’re likely being a mentor and not a Guide. A Guide is a time-consuming role. You shouldn’t have time to Guide more than two projects, and that drops to one project if you are a Sponsor at the same time.
Catalyst: A Catalyst gets an idea off the ground, and it’s not always their idea. In my experience, the idea might not even come from the Catalyst—it can be something we’ve been talking about doing for years but never really got off the ground. Catalysts will create docs or prototypes and drive discussions with senior decision makers to think through the concept. Catalysts are not just “idea factories.” They take the time to develop the concept, drive buy-in for the idea, and work with the larger leadership team to assign engineers to deliver the project. A Catalyst is a time-consuming role because of all the work that needs to be done. At Amazon, that involves prototypes, docs and discussions. It is hard to effectively Catalyze more than one or two things at once. It is important to note that Catalysts, like Tie-Breakers, are not permanent roles. Once a project is catalyzed (e.g., in engineering with a dedicated team working on the project), a Catalyst moves out of the role. The Catalyst might take on a Guide or Sponsor role on the project, or not. Not every project needs a Catalyst. A Catalyst is a very helpful (arguably critical) role for your most ambitious, complex, and/or ambiguous problems in the organization.
Tie-Breaker: A Tie-Breaker makes a decision after a debate. At Amazon, that means deeply understanding the different positions, weighing in with a choice, and then formally closing it out with an email or a doc to the larger group. Not every project needs a Tie-Breaker. But if your project gets stuck in a consensus-seeking mode without making progress on hard decisions, a senior engineer might have to step in as a Tie-Breaker. Tie-Breakers own breaking a log-jam on direction in the team by making a decision. Obviously, a Tie-Breaker has to have great judgment. But it is incredibly important that the Tie-Breaker listens well and understands all the nuances of the different positions as part of breaking the tie. When a Tie-Breaker drives a choice, they must bring other engineers into their thought process so that all the engineers in the debate understand the “why” behind the choice, even if some are disappointed by the direction. A Tie-Breaker must have strong engineering and organizational acumen in this role. Sometimes an organization will depend on a small set of senior engineers to play the role of Tie-Breaker because they are so good at it. As a successful Tie-Breaker, you want to be careful not to set a tone that every decision, no matter how small, must go through you. You’ll quickly transition from Tie-Breaker to a “decision bottleneck” at that point—and that is not a role any team needs. If a team finds itself frequently seeking out a Tie-Breaker, it could be a sign that the team needs help understanding how to make decisions. That’s a topic for a different time. The Tie-Breaker role is considered a “moment in time” role, versus Sponsor/Guide, which are ongoing until you reach a milestone. Once the decision is made and closed out, you’re no longer the Tie-Breaker.
Catcher: A Catcher gets a project back on track, often from a technical perspective. It requires high judgement because a Catcher drives prioritization and formulates a pragmatic plan under tight deadlines. Catchers must quickly do their own detailed analysis to understand the nuances of the problem and come up with the path forward in the right timeframe. As a comparison, a Tie-Breaker tends to step in when the pros/cons of the different approaches are well known and the team needs to make a hard decision. Once “caught” (i.e., the project is back on track and moving forward), a project doesn’t need the Catcher anymore. Sometimes Principal Engineers can do too much catching. Don’t get me wrong, we are all Catchers sometimes—including me. Any fast-paced business needs Catchers in engineering and management. It teaches important skills about leadership in difficult moments and helps the business by landing deliverables. It also teaches you what not to do next time. However, it is better to generalize the Catcher skill set across more engineers, and not depend on a small set of Principal Engineers as Catchers. If a Principal Engineer plays Catcher all the time through a succession of projects, it leaves no time to develop skills in other roles.
Participant: A Participant works on something without one of these explicitly assigned leadership roles. A Participant can be active or passive. Active participants are hands-on, and do things like spend a few days working through a design discussion, or occasionally pick up a coding task on a project. Passive participants offer up a few points in a meeting and move on. In general, if you’re going to participate it’s better to do so actively. Time-boxing some passive participation (e.g., office hours for engineers) can be a useful mechanism to stay connected to the team. However, keep in mind that it is easy for your time to get consumed by being a Participant in too many things.
(via Marc Brooker)
Tags: roles principal-engineer work projects project-management amazon aws via:marc-brooker
-
This, like so much of the Pimoroni catalog, is a lovely piece of gadgetry. If I hadn’t already built a very nice e-ink home dashboard a couple of years back, I would definitely be doing so using one of these.
Honorable mention goes to the Pimoroni Presto: https://shop.pimoroni.com/products/presto?variant=54894104019323 , which is a beautifully designed tiny colour touchscreen with an RP2350 onboard. Not e-ink, unfortunately, which is a key feature for pervasive dashboards IMO, but still, I can see lots of use-cases for that gadget too….
Tags: e-ink gadgets dashboards home devices hardware pimoroni rp2350 hacks
-
Here we go again: another predictive-algorithm-driven bias machine, this time used to refuse welfare benefits:
Lighthouse Reports and Svenska Dagbladet obtained an unpublished dataset containing thousands of applicants to Sweden’s temporary child support scheme, which supports parents taking care of sick children. Each of them had been flagged as suspicious by a predictive algorithm deployed by the Social Insurance Agency. Analysis of the dataset revealed that the agency’s fraud prediction algorithm discriminated against women, migrants, low-income earners and people without a university education. Months of reporting — including conversations with confidential sources — demonstrate how the agency has deployed these systems without scrutiny despite objections from regulatory authorities and even its own data protection officer.
Tags: sweden predictive algorithms surveillance welfare benefits bias data-protection fraud
Thalidomide chirality paradox explained
Molecule chirality (“left-handedness” and “right-handedness”) has been in the news again recently.
What is little known is the relevance of chirality to the thalidomide disaster. Thalidomide, the drug which was prescribed widely to pregnant women in the 1950s for the treatment of morning sickness, was later discovered to be a chiral molecule, and while the right-handed (R) molecule was an effective sedative, the left-handed (S) one was extremely toxic, causing thousands of children around the world to be born with severe birth defects. The mystery is, why didn’t this toxicity emerge during animal experiments? Here’s a paper with a potential explanation:
Twenty years after the thalidomide disaster in the late 1950s, Blaschke et al. reported that only the (S)-enantiomer of thalidomide is teratogenic [jm: causing birth defects]. However, other work has shown that the enantiomers [“mirror” molecules] of thalidomide interconvert in vivo, which begs the question: why is teratogen activity not observed in animal experiments that use (R)-thalidomide given the ready in vivo racemization (“thalidomide paradox”)? Herein, we disclose a hypothesis to explain this “thalidomide paradox” through the in-vivo self-disproportionation of enantiomers. Upon stirring a 20% ee solution of thalidomide in a given solvent, significant enantiomeric enrichment of up to 98% ee was observed reproducibly in solution. We hypothesize that a fraction of thalidomide enantiomers epimerizes in vivo, followed by precipitation of racemic [equally mixed between R/S forms] thalidomide in (R/S)-heterodimeric form. Thus, racemic thalidomide is most likely removed from biological processes upon racemic precipitation in (R/S)-heterodimeric form. On the other hand, enantiomerically pure thalidomide remains in solution, affording the observed biological experimental results: the (S)-enantiomer is teratogenic, while the (R)-enantiomer is not.
Tags: chirality thalidomide molecules drugs medicine papers chemistry
UK passes the Online Safety Act
Apparently “The Online Safety Act applies to every service which handles user-generated content and has “links to the UK”, with a few limited exceptions listed below. The scope is extraterritorial (like the GDPR) so even sites entirely operated outside the UK are in scope if they are considered to have “links to the UK”.”
A service has links to the UK if any of the following apply:
- the service has a “significant number” of UK users;
- UK users form one of the target markets for the service;
- the service is accessible to UK users and “there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the UK” (this seems less likely to apply for smaller services but who knows).
Tags: osa uk safety regulations ofcom
Why did Silicon Valley turn right?
A great essay on the demise of the 1990s/2000s liberal consensus in Silicon Valley:
No-one now believes – or pretends to believe – that Silicon Valley is going to connect the world, ushering in an age of peace, harmony and likes across nations. […]
A decade ago, liberals, liberaltarians and straight libertarians could readily enthuse about “liberation technologies” and Twitter revolutions in which nimble pro-democracy dissidents would use the Internet to out-maneuver sluggish governments. Technological innovation and liberal freedoms seemed to go hand in hand. Now they don’t. Authoritarian governments have turned out to be quite adept for the time being, not just at suppressing dissidence but at using these technologies for their own purposes. Platforms like Facebook have been used to mobilize ethnic violence around the world, with minimal pushback from the platform’s moderation systems […]
My surmise is that this shift in beliefs has undermined the core ideas that held the Silicon Valley coalition together. Specifically, it has broken the previously ‘obvious’ intimate relationship between innovation and liberalism. I don’t see anyone arguing that Silicon Valley innovation is the best way of spreading liberal democratic awesome around the world any more, or for keeping it up and running at home. Instead, I see a variety of arguments for the unbridled benefits of innovation, regardless of its benefits for democratic liberalism. I see a lot of arguments that AI innovation in particular is about to propel us into an incredible new world of human possibilities, provided that it isn’t restrained by DEI, ESG and other such nonsense. Others (or the same people) argue that we need to innovate, innovate, innovate because we are caught in a technological arms race with China, and if we lose, we’re toast. Others (sotto or brutto voce; again, sometimes the same people) contend innovation isn’t really possible in a world of democratic restraint, and we need new forms of corporate authoritarianism with a side helping of exit, to allow the kinds of advances we really need to transform the world.
Tags: essays henry-farrell tech politics silicon-valley fascism democracy liberalism
-
How a simple math error sparked a panic about toxic chemicals in black plastic kitchen utensils:
Plastics rarely make news like this. From Newsmax to Food and Wine, and from the Daily Mail to CNN, the media uptake was enthusiastic on a paper published in October in the peer-reviewed journal Chemosphere. “Your cool black kitchenware could be slowly poisoning you, study says. Here’s what to do,” said the LA Times. “Yes, throw out your black spatula,” said the San Francisco Chronicle. Salon was most blunt: “Your favorite spatula could kill you,” it said. […]
The paper correctly gives the reference dose for BDE-209 as 7,000 nanograms per kilogram of body weight per day, but calculates this into a limit for a 60-kilogram adult of 42,000 nanograms per day. So, as the paper claims, the estimated actual exposure from kitchen utensils of 34,700 nanograms per day is more than 80 per cent of the EPA limit of 42,000. That sounds bad. But 60 times 7,000 is not 42,000. It is 420,000. This is what Joe Schwarcz [director of McGill University’s Office for Science and Society] noticed. The estimated exposure is not even a tenth of the reference dose.
(tags: cooking research science plastics errors maths math fail papers)
-
Send push notifications to your phone via PUT/POST. “a simple HTTP-based pub-sub notification service. It allows you to send notifications to your phone or desktop via scripts from any computer, and/or using a REST API. It’s infinitely flexible, and 100% free software.”
I’ve been using a personal Slack for this purpose, but this is a decent-sounding alternative.
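For instance, publishing a notification is just an HTTP PUT/POST of the message body to a topic URL. A minimal sketch in Python, with a hypothetical self-hosted instance and topic name:

```python
import requests

# Hypothetical instance and topic; anything PUT/POSTed to a topic URL
# gets pushed to the phones/desktops subscribed to that topic.
requests.put(
    "https://push.example.org/homeserver-alerts",
    data="Backup failed on the homeserver!",
    timeout=10,
)
```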
(tags: notification push alerting open-source android ios push-messaging)
The state of Tomi’s Home Assistant in 2024
Wow, this is some setup. Really quite a lot of automation! I note that his Mitsubishi heat pump and Midea dehumidifier have wifi control, I can see that being useful
(tags: home-automation ha home home-assistant hacks automation)
-
A Danish web shop selling coffee beans and capsules; my new go-to for Lavazza, with free shipping to Ireland for orders over EUR50.
(tags: shopping coffee ireland denmark coffee-beans)
-
OK, this is quite cool: “the first ever [language] models trained exclusively on open data, meaning data that are either non-copyrighted or are published under a permissible license. These are the first fully EU AI Act compliant models. In fact, Pleias sets a new standard for safety and openness.”
Training large language models required copyrighted data until it did not. Today we release Pleias 1.0 models, a family of fully open small language models. Pleias 1.0 models include three base models: 350M, 1.2B, and 3B parameters. They feature two specialized models for knowledge retrieval with unprecedented performance for their size on multilingual Retrieval-Augmented Generation, Pleias-Pico (350M parameters) and Pleias-Nano (1.2B parameters). […]
Our models are:
- multilingual, offering strong support for multiple European languages;
- safe, showing the lowest results on the toxicity benchmark;
- performant for key tasks, such as knowledge retrieval;
- able to run efficiently on consumer-grade hardware locally (CPU-only, without quantisation).
Pleias 1.0 family embodies a new approach to specialized small language models, for end applications: wound-up models. We have implemented a set of ideas and solutions during pretraining that produce a frugal yet powerful language model specifically optimized for further RAG implementations. We release two wound-up models further trained for Retrieval Augmented Generation (RAG): Pleias-pico-350m-RAG and Pleias-nano-1B-RAG. These models are designed to be implemented locally, so we prioritized frugal implementation. As our models are small, they can run smoothly, even on devices with limited RAM.
And here’s their fully open training set: https://huggingface.co/datasets/PleIAs/common_corpus
(tags: llms models huggingface ai pleias rag ai-act open-data)
UK benefits AI system found to show bias
File this under “the least surprising news ever”:
An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality, the Guardian can reveal. An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.
The most interesting aspect of the published report is that currently “there is no established numerical or statistical benchmark at which referral or outcome disparity can be defined as within tolerance”.
I would have assumed that a lack of bias — measured against a “false positive” rate, ie. the rate at which benefits recipients selected for additional checks were then found to be legitimate and not committing fraud — would have been a design goal, and a critical KPI for such a system.
There are going to be a lot of similar examples in the years to come — here’s hoping this “bias measurement” KPI becomes established as a concept.
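To make the idea concrete, here’s a sketch of the kind of KPI I mean (entirely my own illustration, with made-up field names; nothing here comes from the DWP’s report):

```python
from collections import defaultdict

def false_positive_rate_by_group(cases):
    """cases: iterable of dicts with hypothetical fields 'group'
    (e.g. an age band), 'flagged' (selected for extra checks), and
    'fraud' (whether fraud was actually found on investigation)."""
    flagged, fp = defaultdict(int), defaultdict(int)
    for c in cases:
        if c["flagged"]:
            flagged[c["group"]] += 1
            if not c["fraud"]:
                fp[c["group"]] += 1
    # A large spread between groups here is exactly the "referral
    # disparity" the report says it has no tolerance benchmark for.
    return {g: fp[g] / flagged[g] for g in flagged}
```

If the false-positive rate for one age band or nationality sits far above the others, the model is sending disproportionately many legitimate claimants from that group through fraud checks.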
Ridding My Home Network of IP Addresses
(Republishing this one on the blog, instead of just as a gist)
Recent changes in the tech scene have made it clear that relying on commercial companies to provide the services I depend on isn’t a good strategy in the long term, and given that Tailscale is so effective these days as a remote-access system, I’ve gradually been expanding a small collection of self-hosted web apps and services running on my home network.
Until now they’ve mainly been addressed using their IP addresses and random high ports on the internal LAN, for example:
- Pihole: http://10.19.72.7/admin
- Home Assistant: http://10.19.72.11:8123/
- Linkding: http://10.19.72.6:9092/
- Grafana: http://10.19.72.6:3000/
- (plus a good few others)
Needless to say this is a bit messy and inelegant, so I’ve been planning to sort it out for a while. My requirements:
- no more ugly bare IP addresses!
- a DNS domain;
- with HTTPS URLs;
- one per service;
- no visible port numbers;
- fully valid TLS certs, no having to click through warnings or install funny CA certs;
- accessible regardless of which DNS server is in use — ie. using public DNS records. This may seem slightly unusual, but it’s useful so that the internal services can still be accessed when I’m using my work VPN (which forces its own DNS servers);
- accessible internally;
- accessible externally, over Tailscale;
- not accessible externally without Tailscale.
After a few false starts, I’m pretty happy with the current setup, which uses Caddy.
Hosting The Domain At Cloudflare
First off, since the service URLs are not to be accessible externally without Tailscale active, the HTTP challenge approach to provision Let’s Encrypt certs cannot be used. That would require an open-to-the-internet publicly-accessible HTTP server on my home network, which I absolutely want to avoid.
In order to use the ACME DNS challenge instead, I set up my public domain "taint.org" to use Cloudflare as the authoritative DNS server (in Cloudflare terms, "full setup"). This lets Caddy edit the DNS records via the Cloudflare API to handle the ACME challenge process.
One of the internal hosts is needed to run the Caddy server’s reverse proxies; I picked "hass", 10.19.72.11, the Home Assistant host, which didn’t have anything already running on port 80 or port 443. (All of my internal hosts are running on a private /24 IP range, at 10.19.72.0/24.)
The dedicated DNS domain I’m using for my home services is "home.taint.org". In order to use this, I clicked through to the Cloudflare admin panel and created a DNS record as follows:
| Type | Name | Content | Proxy Status | TTL |
| ---- | ---- | ------- | ------------ | --- |
| A | *.home | 10.19.72.11 | DNS only - reserved IP | Auto |
Now, any hostnames under "home.taint.org" will return the IP 10.19.72.11 (where Caddy will run).
I don’t particularly care about exposing my internal home network IPs to the world; it’s a trade-off that allows the URLs to work even if an internal host is using the work VPN, or resolving with 8.8.8.8, or whatever. Giving up that little bit of paranoia is worth it, since the IPs won’t be accessible from outside without Tailscale anyway.
It is worth noting that the Cloudflare-hosted domain doesn’t have to be the same one used for URLs in the home network; using dns_challenge_override_domain you can delegate the ACME challenge from any "home" domain to one which is hosted in Cloudflare.
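I haven’t needed this myself, but as I understand the Caddy docs, the delegation looks something like the following: you CNAME _acme-challenge on the home domain over to a zone that is hosted in Cloudflare, and point Caddy at it. (The “home.example.net” domain and “acme.taint.org” delegate zone here are made up for illustration.)
hass.home.example.net {
    tls {
        dns cloudflare cloudflare_api_token_goes_here
        dns_challenge_override_domain acme.taint.org
    }
    reverse_proxy /* 10.19.72.11:8123
}
As I read it, Caddy then writes the challenge TXT records into acme.taint.org via the Cloudflare API, and Let’s Encrypt follows the CNAME there.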
The Caddy Setup
One wrinkle is that I had to generate a custom Caddy build in order to get the "dns.providers.cloudflare" non-standard module, from https://caddyserver.com/download . This is a click-and-download page which generates a custom Caddy binary on the fly. It would have been nicer if the Cloudflare module was standard, but hey.
Once that’s installed, I can get this output:
$ /usr/local/bin/caddy list-modules
[long list of standard modules omitted]
dns.providers.cloudflare
dns.providers.route53
Non-standard modules: 2
Unknown modules: 0
(Yes, I have Caddy running as a normal service, not as a Docker container. No particular reason; I think Docker should work fine.)
Go to the Cloudflare account dashboard, and create a user API token as described at https://developers.cloudflare.com/fundamentals/api/get-started/create-token/ . In my case, it has Zone / DNS / Edit permission, on the specific zone taint.org.
Copy that token as it’s needed in the "Caddyfile", which now looks like the following:
hass.home.taint.org {
    tls {
        dns cloudflare cloudflare_api_token_goes_here
    }
    reverse_proxy /* 10.19.72.11:8123
}

links.home.taint.org {
    tls {
        dns cloudflare cloudflare_api_token_goes_here
    }
    reverse_proxy /* 10.19.72.6:9092
}

pi.home.taint.org {
    tls {
        dns cloudflare cloudflare_api_token_goes_here
    }
    redir / /admin/
    reverse_proxy /admin/* 10.19.72.7:80
}

grafana.home.taint.org {
    tls {
        dns cloudflare cloudflare_api_token_goes_here
    }
    reverse_proxy /* 10.19.72.6:3000
}

[many other services omitted]
Running “sudo caddy run” in the same dir will start up and verbosely log what it’s doing.
(Once you’re happy enough, you can get Caddy running in the normal systemd service way.)
After setting those up, I now have my services accessible locally as:
- Home Assistant: https://hass.home.taint.org/
- Pihole: https://pi.home.taint.org/
- Grafana: https://grafana.home.taint.org/
- Linkding: https://links.home.taint.org/
Caddy seamlessly goes off and configures fully valid TLS certs with no fuss. I found it much tidier than Certbot, or Nginx Proxy Manager.
The Tailscale Setup
So this has now sorted out all of the requirements bar one:
- accessible externally, over Tailscale.
To do this I had to log into Tailscale’s admin console at https://login.tailscale.com/admin/machines , pick a host on the 10.19.72.0/24 internal LAN, click its dropdown menu, choose “Edit Route Settings…”, and enable a Subnet Route for 10.19.72.0/24. By doing this, all of the service.home.taint.org DNS records are now accessible, remotely, once Tailscale is enabled; I don’t even need to use ts.net names to access them! Perfect.
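(For what it’s worth, I believe the same thing can be done from the subnet-router host itself with something like “sudo tailscale up --advertise-routes=10.19.72.0/24”, with the route then approved in the admin console; I set it up through the console, though, so treat that command as a hedge rather than a tested recipe.)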
Anyway, that’s the setup — hopefully this writeup will help others. And kudos to Caddy, Let’s Encrypt and Tailscale for making this relatively easy.
-
Google DeepMind announce their new AI model for weather forecasting, in collaboration with the ECMWF:
Today, in a paper published in Nature, we present GenCast, our new high resolution (0.25°) AI ensemble model. GenCast provides better forecasts of both day-to-day weather and extreme events than the top operational system, the European Centre for Medium-Range Weather Forecasts’ (ECMWF) ENS, up to 15 days in advance. We’ll be releasing our model’s code, weights, and forecasts, to support the wider weather forecasting community. […] GenCast is a diffusion model, the type of generative AI model that underpins the recent, rapid advances in image, video and music generation. However, GenCast differs from these, in that it’s adapted to the spherical geometry of the Earth, and learns to accurately generate the complex probability distribution of future weather scenarios when given the most recent state of the weather as input. To train GenCast, we provided it with four decades of historical weather data from ECMWF’s ERA5 archive. This data includes variables such as temperature, wind speed, and pressure at various altitudes. The model learned global weather patterns, at 0.25° resolution, directly from this processed weather data.
It’s open source: https://github.com/google-deepmind/graphcast . And here are the open-released model weights: https://console.cloud.google.com/storage/browser/dm_graphcast
GraphCast (the previous iteration) has public forecasts published at https://charts.ecmwf.int/?query=GraphCast , under a CC-BY-NC-SA-4 licence — it would be great if the GenCast forecasts join this data set.
Paper: https://arxiv.org/abs/2312.15796
This all looks really great, a fantastic commitment to (genuine) openness and open data, and the paper seems rigorous (to this amateur). Great stuff.
(tags: forecasting weather ai gencast graphcast deepmind google ecmwf genai)
TikTok in hot water over Romanian elections
‘We are getting fed up’: EU lawmakers snap at TikTok over Romanian election:
For years, the Chinese-owned social media app has brushed off security concerns in the United States and Europe that it could be used for mass manipulation and espionage. It now faces an intense regulatory storm in Bucharest over whether it played a role in skewing the democratic process in an EU country and NATO member of 19 million people. […] “Honestly speaking, we are getting fed up by the documents and the empty promises,” Swedish center-right European lawmaker Arba Kokalari said near the end of the hearing.
(tags: tiktok elections romania eu bias news propaganda democracy social-media)
noyb is now qualified to bring collective redress actions
“noyb is now approved as a so-called “Qualified Entity” to bring collective redress actions in courts throughout the European Union. Such action under Directive (EU) 2020/1828 can either be an “injunction” or a “redress” measure. “Injunctions” generally prohibit a company from engaging in illegal practices, including any GDPR violations. “Redress” measures allow a European version of a “Class Action”, where thousands or millions of users could be represented by noyb and for example ask for non-material damages when their personal data was unlawfully processed.” This is very interesting — and timely, given the mass scraping of user data to feed AI training sets…
(tags: noyb data-privacy data-protection class-actions law eu collective-redress)
Privacy Disasters: FaceHuggers Are Eating Your Skeets
Good take from Carey Lening on the recent Hugging Face release of a million-BlueSky-post dataset:
Once again, we’ve got a collective action problem that’s being ignored in favor of technological progress, big money, data extraction, and libertarian notions of ‘public data’. It’s a shitty look. Both Bluesky and HF are acting like the host who’s egging the dickheads on, and it’s really disappointing as a user to know that this is probably what we should have expected all along.
(tags: data hugging-face ai training bluesky public data-protection privacy datasets scraping)
-
This was news to me! There’s another fractal pattern derived from the Mandelbrot set which I’d never seen before:
As it turns out, it’s not just the boundary of the Mandelbrot set that’s mind-bogglingly complex: the same goes for the (xₙ, yₙ) escape trajectories associated with the (u, v) pixels near the set’s edge. The iterated coordinates follow elaborate, long-winded paths through space; their ethereal trails form a density plot reminiscent of the Mandelbrot fractal itself.
(tags: fractals mandelbrot buddhabrot graphics maths via:lcamtuf)
Rewilding fields massively improved bumblebee numbers in Scotland
“Bumblebee population increases 116 times over in ‘remarkable’ Scotland project”:
Rewilding Denmarkfield, a 90-acre project based just north of Perth, has been working to restore nature to green spaces in an increasingly built up area for the past two years. Statistics from the charity show in 2021, when some of the fields managed by the project were still barley monoculture, only 35 bumblebees were counted. But by 2023, after just two years of nature restoration work in the same fields, the population increased to 4,056. The diversity of bumblebees also doubled, according to the charity, from five to ten different species.
(tags: bees bumblebees scotland fields farming rewilding fallow nature)
-
“an innovative MySQL distribution that adopts a compute-storage separation architecture, with storage backed by S3 (and S3-compatible systems). WeSQL has completely replaced MySQL’s traditional disk storage with S3. All MySQL data—binlogs, schemas, storage engine metadata, WAL, and data files—are entirely (not partially!) stored as objects in S3. The 11 nines of durability provided by S3 significantly enhances data reliability. Additionally, WeSQL can start from a clean, empty instance, connect to S3, load the data, and begin serving immediately with no additional setup required. It is ideal for users who need an easy-to-manage, cost-effective, and developer-friendly MySQL database solution, especially for those needing support for both Serverless and BYOC (Bring Your Own Cloud).” (via Ian on ITC)
-
Libreoffice in the browser, compiled to WASM and available as open source, or as a supported product. (via David Gerard)
(tags: libreoffice wasm web javascript compilation)
Reversing.Works Investigation Exposes Glovo’s Data Privacy Violations
Ha, this is great:
Reversing.Works, an innovative project dedicated to exposing abuses within gig economy platforms, uncovered significant labour law violations within Glovo’s algorithmic management system and provided critical evidence for an investigation by the Italian Data Protection Authority. After a year-long investigation, the DPA fined Glovo 5 Million €, and demanded corrective action from the platform.
Glovo’s algorithmic management system was found to have misused workers’ personal data in ways that violated labour law, including monitoring workers’ movements outside of their work shifts, keeping hidden scores on workers, and sending detailed monitoring of their work to third parties outside the scope of their contracts. This was a mixed violation of both Italian labour law and the General Data Protection Regulation (GDPR). Reversing.Works’ investigation, using sophisticated reverse engineering techniques, sheds light on the hidden mechanics that drive the platform’s model of operation, and perhaps additional business dynamics. […]
“It’s surprising that unions never used a tool like this,” says Gaetano Priori, the lead investigator at Reversing.Works. “Privacy is an individual right, so it hasn’t been seen as a tool for labour struggles. But it has potential in digitally-intermediated labour because one violation could affect all the workers in all the regions in which a company operates.”
Reversing.Works has shown how GDPR and tech-enabled investigation can help expose bad practices and create fairer working conditions. This case is a call to action for all gig workers, showing that existing legal tools can be used for the collective good. Priori adds, “This should be a wake-up call for all workers managed by technology. With GDPR and tech, we have the means to challenge unfair practices.”
(tags: reverse-engineering gdpr data-protection data-privacy gig-economy glovo italy unions)
Generative AI Pushes Outcome Over Process (And This Is Why I Hate It)
This is a really interesting point about education and learning, in general:
AI technology is based on the idea that the important part of creating things is the outcome, not the process. Can’t draw? That shouldn’t stop you from making a picture. Worried about your writing? Why should that stop you from handing in a coherent essay? The ads for AI all promise that you’ll be able to produce things without all the tedious work of actually producing it – isn’t that great? Well no, it’s not – it’s terrible. It betrays a fundamental misunderstanding of why creating things has value. It’s terrible in general, but I am especially offended by this idea in the context of education, and in this post I want to lay this idea out in a little detail.
(tags: education learning ai process-vs-outcome working how-we-work)
-
Ooh, interesting — this can unlock a few new system designs:
You can append data to the end of existing objects stored in the S3 Express One Zone storage class in directory buckets. We recommend that you use the ability to append data to an object if the data is written continuously over a period of time or if you need to read the object while you are writing to the object. Appending data to objects is common for use-cases such as adding new log entries to log files or adding new video segments to video files as they are trans-coded then streamed. By appending data to objects, you can simplify applications that previously combined data in local storage before copying the final object to Amazon S3.
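Here’s a rough boto3 sketch of the pattern. Hedged: the bucket and key names are made up, I haven’t run this against a real directory bucket, and the WriteOffsetBytes parameter reflects my understanding of the new API:

```python
import boto3

s3 = boto3.client("s3")
# Appends only work on directory buckets (S3 Express One Zone);
# this bucket name is hypothetical.
bucket = "my-logs--use1-az4--x-s3"

s3.put_object(Bucket=bucket, Key="app.log", Body=b"first entry\n")

# Append by writing at the current end-of-object offset.
size = s3.head_object(Bucket=bucket, Key="app.log")["ContentLength"]
s3.put_object(
    Bucket=bucket,
    Key="app.log",
    Body=b"second entry\n",
    WriteOffsetBytes=size,  # assumed name of the new append parameter
)
```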
-
A readable explanation of the (relatively new) technique of Binary Quantization applied to LLM embeddings. It’s pretty amazing that this compression technique can work without destroying search recall and accuracy, but it seems it does!
Using BQ will reduce your memory consumption and improve retrieval speeds by up to 40x […] Binary quantization (BQ) converts any vector embedding of floating point numbers into a vector of binary or boolean values. […] All [vector floating point] numbers greater than zero are marked as 1. If it’s zero or less, they become 0. The benefit of reducing the vector embeddings to binary values is that boolean operations are very fast and need significantly less CPU instructions. […] One of the reasons vector search still works with such a high compression rate is that these large vectors are over-parameterized for retrieval. This is because they are designed for ranking, clustering, and similar use cases, which typically need more information encoded in the vector.
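A small numpy sketch of the core idea, plus the oversample-then-rescore step that implementations typically pair with it (my own illustration, not any particular vendor’s code):

```python
import numpy as np

rng = np.random.default_rng(1)
docs = rng.standard_normal((10_000, 1024)).astype(np.float32)  # stand-in embeddings
query = rng.standard_normal(1024).astype(np.float32)

# Binary quantization: keep only the sign of each component, then pack
# 8 booleans per byte: 4096 bytes -> 128 bytes per vector (32x smaller).
docs_bq = np.packbits(docs > 0, axis=1)
query_bq = np.packbits(query > 0)

# Hamming distance via XOR + popcount stands in for the float distance.
hamming = np.unpackbits(docs_bq ^ query_bq, axis=1).sum(axis=1)

# Retrieve an oversampled candidate set with the binary index,
# then rescore the short-list with the original float vectors.
candidates = np.argsort(hamming)[:100]
scores = docs[candidates] @ query
top10 = candidates[np.argsort(-scores)[:10]]
```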
https://www.elastic.co/search-labs/blog/rabitq-explainer-101 is a good maths-heavy explanation of the Elastic implementation using RaBitQ. See also some results from HuggingFace: https://huggingface.co/blog/embedding-quantization .
(tags: embedding llm ai algorithms data-structures compression quantization binary-quantization quantisation rabitq search recall vectors vector-search)
[pdf] Sky UK on their IPv6/IPv4 gateways
A presentation from RIPE89 detailing Sky’s MAP-T setup, “IPv6-only with IPv4aaS (MAP-T)”. Basically they now use MAP-T translation devices to provide “IPv4 as a service”, transparent NAT mapping between IPv6 and IPv4. I suspect this is similar to how Virgin Media operates their network, too, in Ireland. Interestingly, there are now network features (like local CDN POPs) which are more performant when using IPv6 natively, as they avoid a “trombone” route via a network-border translation device to get an IPv4 address. As a result, it’s actually starting to be worthwhile running an IPv6 home network….
(tags: ipv4 ipv6 networking home sky isps ripe map-t nat ip)
-
from Marsh Gardiner (https://hachyderm.io/@earth2marsh ), a “Mastodon To Pinboard bookmark integration script” — “a Python script to mimic the functionality of Pinboard’s Twitter integration. It reads the latest toots from a Mastodon account and bookmarks them in a Pinboard.in account. It is meant to be run repeatedly as a crontab job to continuously update your bookmarks in the background”.
(tags: mastodon pinboard bookmarks bookmarking scripts)
-
“Query the Bluesky Jetstream with DuckDB” — this is a lovely little hack from Tobias Müller (https://bsky.app/profile/tobilg.com). Basically, it’s a pre-built DuckDB database file which contains tables which refer to Parquet files in an R2 bucket, which are (presumably) updated regularly with new Bluesky posts from their Jetstream. Tobias says: “there‘s a data gathering process that listens to the Jetstream and dumps the NDJSONs to the filesystem as hourly files. Then, DuckDB transforms the data to Parquet files, and they get uploaded with rclone.” It’s a neat demo of how modern data lake tech can be exposed for public usage in a nice way.
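Consuming it then looks something like this from Python (the Parquet URL below is a placeholder; see Tobias’s post for the real database file):

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")

# Placeholder URL: in the real setup, the shipped .duckdb file contains
# views that already point at the hourly Parquet dumps in R2.
n = con.execute("""
    SELECT count(*)
    FROM read_parquet('https://example.r2.dev/jetstream/2024-12-01-00.parquet')
""").fetchone()[0]
print(n, "posts in that hour")
```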
(tags: s3 parquet duckdb sql jetstream bluesky firehose data-lakes r2)
write(1) no longer part of util-linux
```
util-linux (2.40.2-11) unstable; urgency=medium

  * The mesg(1) and write(1) programs are no longer provided. It is
    believed chatting between users is nowadays done using more secure
    facilities.

 -- Chris Hofstaedtler  Wed, 13 Nov 2024 12:58:06 +0100
```
Sic transit gloria mundi. (via Doug on ITC Slack)
(tags: via:itc write mesg unix linux bsd util-linux cli debian)
For the past several years, since the demise of Google Reader, I’ve been augmenting the RSS/Atom syndication of this linkblog with posts to various social media platforms using bot accounts. This is kind of a form of POSSE — “Publish (on your) Own Site, Syndicate Elsewhere” (ideally I’d be self-hosting Pinboard to qualify for that I guess).
The cross-posts went first to Twitter (RIP), and more recently to Mastodon via botsin.space. With the shutdown of that instance, I’ve had to make a few changes to my syndication script which gateways the contents to Mastodon, and I also took the opportunity to set up a BlueSky gateway at the same time. On the prompting of @kellan, here’s a quick write-up of where it all currently stands…
Primary Source: Pinboard
The primary source for the blog’s contents is my long-suffering account at https://pinboard.in/u:jm/, where I have been collecting links since 2009 (and before that, del.icio.us since I think 2004?, so that’s 20 years of links by now).
Pinboard has a pretty simple UI for link collection using a bookmarklet, which I’ve improved a tiny bit to open a large editor textbox instead of the default tiny one.
The resulting posts generally tend to include a blockquote, a short lede, and a few tags in the normal Pinboard/Del.icio.us style.
I find editing text posts in the Pinboard bare-bones UI to be easier and more pleasant than WordPress, so I generally use that as the primary source. Based on the POSSE principle, I should really figure out a way to get this onto something self-hosted, but Pinboard works for me (at the moment at least).
Publish from Pinboard to Blog
I use a Python script run from cron, to gateway new bookmarks from https://pinboard.in/u:jm/ as individual posts, formatted with Markdown, to this blog using the WordPress posting API: Github repo
Publish from Pinboard to Mastodon
This reads the Pinboard RSS feed for https://pinboard.in/u:jm/ and posts any new URLs (and the first 500 chars of its description) to the “jmason_links” account at mstdn.social: Github repo
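The guts of that gateway are small enough to sketch here (simplified; the real script is in the repo above, and the token handling and seen-URL tracking are illustrative):

```python
from pathlib import Path
import feedparser
import requests

FEED = "https://feeds.pinboard.in/rss/u:jm/"
MASTODON = "https://mstdn.social"
TOKEN = "app-token-goes-here"  # from the account's Development settings

# Track already-posted URLs in a local file so reruns are idempotent.
seen_file = Path("seen.txt")
seen = set(seen_file.read_text().split()) if seen_file.exists() else set()

for entry in feedparser.parse(FEED).entries:
    if entry.link in seen:
        continue
    status = f"{entry.title} {entry.link}\n\n{entry.description[:500]}"
    requests.post(
        f"{MASTODON}/api/v1/statuses",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={"status": status},
        timeout=30,
    ).raise_for_status()
    with seen_file.open("a") as f:
        f.write(entry.link + "\n")
```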
Migration from the old Mastodon account at botsin.space to mstdn.social was really quite easy; after manually setting up the new account at mstdn.social and copying over the bio text, I hit the "Move from a different account" page, and entered @jm_links@botsin.space for the handle of the old account to migrate from.
I then logged in to the old account on botsin.space and hit the "Move to a different account" page, entering @jmason_links@mstdn.social for the handle to migrate to. This triggered copying of the followers from one account to the other, and left the old account dormant with a link to the new location instead.
(One thing to watch out for is that once the move is triggered, the profile for the old account becomes read-only; I’ve since had to temporarily undo the "moved" status in order to update the profile text, which was a bit messy.)
Publish from Pinboard to BlueSky
This reads the same Pinboard RSS feed as the Mastodon gateway, and gateways new posts from there to the “jmason.ie” account at BlueSky. This is slightly more involved than the Mastodon script, as it attempts to generate an embed card and mark up any links in the post appropriately: Github repo
I have a cron on my home server which runs those Mastodon and BlueSky gateway scripts every 15 minutes, and that seems to be a reasonable cadence without hammering the various APIs too much.
-
This, via Reddit, is an amazing guide to buying a used electric vehicle, from Croatia’s EVClinic, who are a “car reverse engineering and specialty repair outfit. Taking cars apart, figuring out how and when they break, and figuring out how to repair them is their bread and butter. They’ve gained a reputation across Europe for being able to fix problems that even the manufacturers themselves don’t know how to deal with. They’ve now distilled that working experience into a report, detailing which vehicles are reliable in the long term – and which ones should be avoided. Each model also has a list of which parts are most likely to break, after how much mileage they are likely to break, and how much it costs to repair.”:
Based on our experience and that of our colleagues’ labs at 15-20 different locations worldwide, we have concluded that the battery is the last concern on the list during the first 10 years of an EV’s life, with some vehicles covering a large number of miles with the original battery system. The most common failures within 10 years of using an EV are: 1. Electric motors, 2. OBC chargers, 3. DC-DC/inverters, and only in fourth place, batteries. Some vehicles can go 10 years without any breakdowns or servicing, resulting in significant savings compared to fossil fuel vehicles. Even EVs that experience faults are cheaper to maintain than their fossil-fueled counterparts, even when factoring in battery and motor failures. Fossil fuel vehicles consume at least €0.13 per kilometer just in fuel, excluding services and breakdowns. With services, breakdowns, and maintenance, they consume an additional minimum of €0.08, totaling over €40,000 for 200,000 km. Thus, a faulty EV is still cheaper than a “functional” fossil fuel vehicle.
The article lists the Hybrid and Battery EVs available in Europe, and gives a rating to each one regarding their reliability and repairability, in extreme detail. Unfortunately, the BEV I drive — the Nissan Leaf — gets a terrible review due to what they consider really crappy battery technology choices. The perils of being an early adopter…. :(
(tags: nissan leaf bevs evs driving cars hybrid-vehicles electric-vehicles used-cars repair)
How to Learn: Userland Disk I/O
This is an interesting hodge-podge of key bits of information about disk I/O, file integrity and durability, buffering or unbuffered writes, async I/O, and which filesystems to use for high-I/O database operation on Linux, MacOS and Windows. One thing that was new to me: “You can periodically scrape /proc/diskstats to self-report on disk metrics”.
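On that last point, a quick sketch of what scraping /proc/diskstats looks like (field positions per the kernel’s iostats documentation; sectors are 512 bytes in this file):

```python
# /proc/diskstats fields: 0 major, 1 minor, 2 device name,
# 3 reads completed, 5 sectors read, 7 writes completed, 9 sectors written.
def read_diskstats(path="/proc/diskstats"):
    stats = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            stats[fields[2]] = {
                "reads_completed": int(fields[3]),
                "sectors_read": int(fields[5]),
                "writes_completed": int(fields[7]),
                "sectors_written": int(fields[9]),
            }
    return stats

# These are cumulative counters; scrape periodically and diff to get rates.
for dev, counters in read_diskstats().items():
    print(dev, counters)
```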
(tags: databases filesystems linux macos fsync durability coding)
-
an embedded storage engine built as a log-structured merge-tree. Unlike traditional LSM-tree storage engines, SlateDB writes all data to object storage [ie. S3, Azure Blob Storage, GCS]. Object storage is an amazing technology. It provides highly-durable, highly-scalable, highly-available storage at a great cost. And recent advancements have made it even more attractive: Google Cloud Storage supports multi-region and dual-region buckets for high availability. All object stores support compare-and-swap (CAS) operations. Amazon Web Service’s S3 Express One Zone has single-digit millisecond latency. We believe that the future of object storage are multi-region, low latency buckets that support atomic CAS operations. Inspired by The Cloud Storage Triad: Latency, Cost, Durability, we set out to build a storage engine built for the cloud. SlateDB is that storage engine.
This looks superb. Chris Riccomini is involved.
-
This looks great!
The first low-threshold funding program for independent developers and small teams creating innovative open-source software. We provide the tech-savvy civil society with access to the resources and processes needed for developing user-centered, innovative software projects. Since 2016, we have funded almost 400 projects. As a learning funding program, we have repeatedly made adjustments to become more efficient and effective. Now we are taking the next step and implementing some significant changes. From now on, we are focusing on funding data security and software infrastructure. Apply with your ideas for innovative open source software in the public interest! You will receive up to €95,000 over six months or €158,000 over ten months of funding from the German Ministry of Education and Research. We will also provide you with coaching, consulting and networking opportunities.
(tags: funding open-source oss via:janl)
GOV.UK chatbot halted by hallucinations
“AI firms must address hallucinations before GOV.UK chatbot can roll out, digital chief claims”:
Trials of a generative AI-powered chatbot for GOV.UK users have found ongoing issues with so-called hallucinations that must be addressed before the technology can be widely deployed, according to one of the government’s digital leaders. […] Speaking at an event this morning, Paul Willmott said: “We have experimented with a generative advice [tool] on GOV.UK. You will just say ‘I’m trying to do this’, or ‘I’m annoyed about this’… The challenge we are having – which is exactly the same as in the commercial sector – is what to do with the 1% of hallucinations where the agent starts to get challenging, or abusive – or even seductive.” Even if only present in a tiny minority of instances, these issues mean that GOV.UK Chat is not yet ready for widespread deployment, according to Willmott. Addressing hallucinations will require the support of the likes of OpenAI and other creators of large language models. “Until we have managed to iron that out – which will require the support of the foundational model creators – we won’t be able to put this live,” he said.
This is hardly surprising, but it’s good to see it being acknowledged and the brakes being applied.
(tags: ai llms hallucations confabulation gov.uk chatbots chatgpt uk)
How the New sqlite3_rsync Utility Works
“I’ve enjoyed following the development of the new sqlite3_rsync utility in the SQLite project. The utility employs a bandwidth-efficient algorithm to synchronize new and modified pages from an origin SQLite database to a replica. You can learn more about the new utility here and try it out by following the instructions here. Curious about its workings, I reviewed the code.” Interesting use of a truncated SHA-3 as the hash() implementation, for speed.
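To illustrate the truncation idea in general terms (my sketch, not the SQLite C implementation):

```python
import hashlib

def page_hash(page: bytes, n: int = 8) -> bytes:
    """First n bytes of a SHA-3 digest: cheap to compare and transmit,
    and plenty to detect differing pages between two copies of one
    database, though far too short for adversarial settings."""
    return hashlib.sha3_256(page).digest()[:n]

# Two pages differing in a single byte hash to different values.
assert page_hash(b"\x00" * 4096) != page_hash(b"\x00" * 4095 + b"\x01")
```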
(tags: sqlite hashing rsync synchronization replication databases storage algorithms)
Using BlueSky as a Mastodon Bot
“A Cheap and Lazy way to create Mastodon Bots using… BlueSky?!” By using the brid.gy gateway service, it’s pretty trivial to use BlueSky as an easy means to make a mastodon bot without having to find a bot-friendly Masto host now that botsin.space is no more. For now, I’m doing this at @jmason.ie@bsky.brid.gy , which is gatewaying the posts from my BlueSky bot at https://bsky.app/profile/jmason.ie — although a more long term approach will be to host the links-to-Mastodon gateway “natively” instead of using brid.gy, IMO.
(tags: mastodon rss gateways social-media bluesky brid.gy bots linkblog)