Skip to content

Author: dailylinks

Links for 2023-06-02

  • Kaspersky reports new targeted malware on iOS

    They are dubbing it "Triangulation":

    We believe that the main reason for this incident is the proprietary nature of iOS. This operating system is a “black box” in which spyware like Triangulation can hide for years. Detecting and analyzing such threats is made more difficult by Apple’s monopoly of research tools, making it the perfect haven for spyware. In other words, as I have said more than once, users are given the illusion of security associated with the complete opacity of the system. What actually happens in iOS is unknown to the cybersecurity experts.

    (tags: ios malware infosec security kaspersky triangulation)

Links for 2023-06-01

  • Chemical found in widely used sweetener breaks up DNA

    Sucralose, as used in Splenda, is genotoxic. big yikes

    (tags: genotoxic sucralose sweeteners additives soft-drinks junk-food food health)

  • "Data protection IS AI regulation"

    The FTC have proposed a judgement against Amazon/Ring: "FTC says Ring employees illegally surveilled customers, failed to stop hackers from taking control of users' cameras. Under proposed order, Ring will be prohibited from profiting from unlawfully collected consumer videos, pay $5.8M in consumer refunds." Meredith Whittaker on Twitter, responding: "Speaking of real AI regulation grounded in reality! The part about Amazon being "prohibited from profiting from unlawfully collected consumer videos" is huge. Data protection IS AI regulation. & in this case will likely mean undoing datasets, retraining/disposing of models, etc." Retraining/discarding datasets is a HUGE deal for AI/ML companies. This is the big stick for regulators. I hope the EU DPCs are paying attention to this judgement.

    (tags: regulation ai ml training data-protection privacy ring amazon ftc)

Links for 2023-05-31

  • Kapsalon

    New fast food frankenstein dish just dropped:

    a fast food dish created in 2003 in the Dutch city of Rotterdam, consisting of a layer of french fries placed into a disposable metal take-away tray, topped with döner or gyro meat, covered with slices of Gouda cheese, and heated in an oven until the cheese melts. Then a layer of shredded iceberg lettuce is added, dressed with garlic sauce and sambal, a hot sauce from Indonesia .. The term kapsalon is Dutch for "hairdressing salon" or barber shop, alluding to one of the inventors of the dish who worked as a hairdresser.
    This sounds delicious.

    (tags: kapsalon fast-food dutch holland rotterdam)

Links for 2023-05-24

  • Mel's Loop

    "The Story of Mel" is a legendary USENET story of "Mel", a Real Programmer from back in the day, performing a truly impressive piece of optimization; a "paean to seat-of-the-pants machine coding", as Micheal puts it. This site is a little shrine to Mel's life and history from a MeFi user. (Via Meehawl)

    (tags: mefi hacks mel usenet history computing-history via:meehawl machine-code)

  • Why the United States should prioritize autonomous demining technology

    Excellent "AI for good" idea from the Bulletin of the Atomic Scientists:

    Investments in and development of technologies for autonomous demining operations, post war, are long overdue and consistent with the White House’s push for a Blueprint for an AI Bill of Rights, which vows to use autonomy for the public good. Alas, while the Defense Department has pursued autonomous systems for the battlefield and the unincentivized private sector has focused on producing dancing robotic dogs, efforts to develop autonomous demining technology have stagnated. The United States should provide funding to energize those efforts, regardless of what decision is made in regard to sending cluster bombs to Kiev.

    (tags: demining ai future warfare mines tech)

Links for 2023-05-23

  • AI Hiring and Ghost Jobs Are Making the Job Search, Labor Market Weird

    The AI enshittification continues:

    Job seekers may virtually interview with or be prescreened by an artificial-intelligence program such as HireVue, Harver, or Plum. After someone applies to a job at a company that uses this software, they may receive an automated survey asking them to answer inane personality-assessment questions like "Which statement describes you best? (a) I love debating academic theories or (b) I adopt a future emphasis." [...] And these AI-moderated processes might not be fair, either. Researchers at the University of California, Berkeley, say that AI decision-making systems could have a 44% chance of being embedded with gender bias, a 26% chance of displaying both gender and race bias, and may also be prone to screening out applicants with disabilities. In one notorious case, an audit of an AI screening tool found that it prioritized candidates who played high-school lacrosse or were named "Jared."

    (tags: jared ai enshittification future jobs work hirevue harver plum ghost-jobs hiring)

  • Erasure Coding versus Tail Latency - Marc's Blog

    A very neat trick via Marc Brooker to improve tail latencies using erasure coding: 'Say I have an in-memory cache of objects. I can keep any object in the cache once, and always go looking for it in that one place (e.g. with consistent hashing). If that place is slow, overloaded, experiencing packet loss, or whatever, I'll see high latency for all attempts to get that object. With hedging I can avoid that, if I store the object in two places rather than one, at the cost of doubling the size of my cache. But what if I wanted to avoid the slowness and not double the size of my cache? Instead of storing everything twice, I could break it into (for example) 5 pieces .. encoded in such a way that I could reassemble it from any four pieces .. . Then, when I fetch, I send five get requests, and have the whole object as soon as four have returned. The overhead here on requests is 5x, on bandwidth is worst-case 20%, and on storage is 20%. The effect on tail latency can be considerable.'

    (tags: architecture cache storage tail-latencies performance marc-brooker lambda erasure-coding algorithms latency)

  • Container Loading in AWS Lambda

    Some lovely details in this writeup of a new system in AWS Lambda, via Marc Brooker:

    This system gets performance by doing as little work as possible (deduplication, caching, lazy loading), and then gets resilience by doing slightly more work than needed (erasure coding, salted deduplication, etc). This is a tension worth paying attention to in all system designs.

    (tags: architecture aws lambda marc-brooker performance storage caching containers caches)

Links for 2023-05-22

  • Paper recommending continuing COVID-19 vaccination for kids

    tl;dr: vaccination of kids is worth it to protect against Long Covid and hospitalisation. "A Methodological Framework for Assessing the Benefit of SARS-CoV-2 Vaccination following Previous Infection: Case Study of Five- to Eleven-Year-Olds", Christina Pagel et al.:

    We present a novel methodological framework for estimating the potential benefits of COVID-19 vaccination in previously infected children aged five to eleven, accounting for waning. We apply this framework to the UK context and for two adverse outcomes: hospitalisation related to SARS-CoV-2 infection and Long Covid. We show that the most important drivers of benefit are: the degree of protection provided by previous infection; the protection provided by vaccination; the time since previous infection; and future attack rates. Vaccination can be very beneficial for previously infected children if future attack rates are high and several months have elapsed since the previous major wave in this group. Benefits are generally larger for Long Covid than hospitalisation, because Long Covid is both more common than hospitalisation and previous infection offers less protection against it. Our framework provides a structure for policy makers to explore the additional benefit of vaccination across a range of adverse outcomes and different parameter assumptions. It can be easily updated as new evidence emerges.

    (tags: vaccines vaccination covid-19 sars-cov-2 modelling long-covid uk)

  • EU hits Meta with record €1.2B privacy fine

    The EDPB finally had to step in and override the pet regulator, our DPC. Here's the big problem though:

    Meta also has until November 12 to delete or move back to the EU the personal data of European Facebook users transferred and stored in the U.S. since 2020 and until a new EU-U.S. deal is reached.
    This is going to be technically infeasible given Meta's architecture, so the next question is, what happens when they fail to do it...

    (tags: meta facebook dpc edpb data-protection data-privacy eu us fines)

  • Dropbox testing HTTP3

    "dark testing", live in production, to a separate test domain. Great way to gather some real-world data. Latencies are appreciably better, particularly for low-quality connections

    (tags: dropbox http3 http2 http protocols udp networking ip testing)

Links for 2023-05-19

  • My students are using AI to cheat. Here’s why it’s a teachable moment

    One of the reasons so many people suddenly care about artificial intelligence is that we love panicking about things we don’t understand. Misunderstanding allows us to project spectacular dangers on to the future. Many of the very people responsible for developing these models (who have enriched themselves) warn us about artificial intelligence systems achieving some sort of sentience and taking control of important areas of life. Others warn of massive job displacement from these systems. All of these predictions assume that the commercial deployment of artificial intelligence actually would work as designed. Fortunately, most things don’t. That does not mean we should ignore present and serious dangers of poorly designed and deployed systems. For years predictive modeling has distorted police work and sentencing procedures in American criminal justice, surveilling and punishing Black people disproportionately. Machine learning systems are at work in insurance and health care, mostly without transparency, accountability, oversight or regulation. We are committing two grave errors at the same time. We are hiding from and eluding artificial intelligence because it seems too mysterious and complicated, rendering the current, harmful uses of it invisible and undiscussed. And we are fretting about future worst-case scenarios that resemble the movie The Matrix more than any world we would actually create for ourselves. Both of these habits allow the companies that irresponsibly deploy these systems to exploit us. We can do better. I will do my part by teaching better in the future, but not by ignoring these systems and their presence in our lives.

    (tags: ai future education teaching society)

Links for 2023-05-18

Links for 2023-05-16

Links for 2023-05-15

  • Kafka vs Redpanda Performance

    I don't use either service, but this is actually an excellent writeup of some high-end performance optimization on modern Linux EC2-based systems with NVMe SSDs, and the benchmarking of same

    (tags: kafka redpanda benchmarks ec2 aws ssd optimization performance ops)

  • Never Give Artificial Intelligence the Nuclear Codes

    Something new to worry about -- giving an AI the keys to the nukes:

    Any country that inserts AI into its [nuclear] command and control will motivate others to follow suit, if only to maintain a credible deterrent. Michael Klare, a peace-and-world-security-studies professor at Hampshire College, has warned that if multiple countries automate launch decisions, there could be a “flash war” analogous to a Wall Street “flash crash.” Imagine that an American AI misinterprets acoustic surveillance of submarines in the South China Sea as movements presaging a nuclear attack. Its counterstrike preparations would be noticed by China’s own AI, which would actually begin to ready its launch platforms, setting off a series of escalations that would culminate in a major nuclear exchange.

    (tags: ai command-and-control nuclear-war nuclear flash-war)

Links for 2023-05-11

  • In defence of swap

    Common misconceptions about swap memory on Linux systems:

    Swap is a useful tool to allow equality of reclamation of memory pages, but its purpose is frequently misunderstood, leading to its negative perception across the industry. If you use swap in the spirit intended, though – as a method of increasing equality of reclamation – you'll find that it's a useful tool instead of a hindrance. Disabling swap does not prevent disk I/O from becoming a problem under memory contention, it simply shifts the disk I/O thrashing from anonymous pages to file pages. Not only may this be less efficient, as we have a smaller pool of pages to select from for reclaim, but it may also contribute to getting into this high contention state in the first place.
    (via valen)

    (tags: linux memory performance swap vm oom)

  • Solar Quote Analyser

    handy web tool to figure out if a quote for a domestic solar PV install in Ireland is cheap, on the money, or too pricey

    (tags: quotes solar-pv solar home money finance)

  • Coinbase spent $65M on Datadog

    in one year -- Sixty. Five. Million. Dollars.

    (tags: datadog saas coinbase fail lol money)

Links for 2023-05-08

Links for 2023-05-05

  • Will A.I. Become the New McKinsey?

    Great stuff from Ted Chiang:

    A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place. The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey?

    (tags: ai capitalism mckinsey future politics ted-chiang)

Links for 2023-05-03

  • The Wide Angle: Understanding TESCREAL — Silicon Valley’s Rightward Turn

    As you encounter these ideologies [Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism] in the wild, you might use the TESCREAL lens, and its alignment with Eurasianism and Putin’s agenda, to evaluate them, and ask whether they tend to undermine or enhance the project of liberal democracy. TESCREAL ideologies tend to advance an illiberal agenda and authoritarian tendencies, and it’s worth turning a very critical eye towards them, especially in cases where that’s demonstrably true. Clearly there are countless well-meaning people trying to use technology and reason to improve the world, but that should never come at the expense of democratic, inclusive, fair, patient, and just governance. The biggest risk AI poses right now is that alarmists will use the fears surrounding it as a cudgel to enact sweeping policy reforms. We should resist those efforts. Now more than ever, we should be guided by expertise, facts, and evidence as we seek to use technology in ways that benefit everyone.

    (tags: ideology future tescreal ea longtermism ai politics silicon-valley)

  • heightened risk of autoimmune diseases after Covid

    More evidence of a "substantially increased risk of developing a diverse spectrum of new-onset autoimmune diseases":

    Previously we knew there were many features of autoimmunity engendered by Covid, but the link to manifesting important autoimmune diseases has not been established. There are still many dots not connected—it’s fuzzy. We need to better understand how the dysregulation of our immune system that can occur from a Covid infection (or even more rarely from a vaccine) can be linked with a serious autoimmune condition. While we’ve fully recognized that people with autoimmune diseases are more vulnerable to Covid and adverse outcomes, the flip of that — that Covid can make some people vulnerable to autoimmune diseases — is what’s new.
    (from the always excellent Eric Topol.)

    (tags: covid-19 long-covid pasc autoimmune diseases health medicine research eric-topol)

Links for 2023-05-02

  • In a small study, an AI 'brain decoder' inches toward reading minds

    In a new Nature Neuroscience paper published Monday, Huth and a team of researchers from the University of Texas at Austin introduced a new “brain decoder” enabled by GPT-1, an earlier version of the artificial neural network technology that underpins ChatGPT. After digesting several hours of training data, the new tool was able to describe the gist of stories the three participants in the proof-of-concept experiment listened to — just by looking at their functional MRI scans.
    Very cool stuff. And I am happy to see the ethical considerations have been considered:
    “It is important to constantly evaluate what the implications are of new brain decoders for mental privacy,” said Jerry Tang, a Ph.D. candidate in Huth’s lab and lead author on the paper, in a press briefing. In devising ways to protect privacy, the authors asked participants to try to prevent the decoder from reconstructing the words they were hearing several different ways. Particularly effective methods included mentally listing off animals, and telling a different story at the same time the podcast was playing were particularly effective at stopping the decoder, said Tang. The authors also found that the decoder had to be trained on each subject’s data and wasn’t effective when used on another person. Between these findings and the fact that any movement would make the fMRI scans worse, the authors concluded that it’s not currently possible for a brain decoder to be used on someone against their will.

    (tags: fmri scanning brain mri mindreading gpt podcasts)

Links for 2023-04-28

  • Inside LAION

    "A High School Teacher’s Free Image Database Powers AI Unicorns":

    To build LAION, founders scraped visual data from companies such as Pinterest, Shopify and Amazon Web Services — which did not comment on whether LAION’s use of their content violates their terms of service — as well as YouTube thumbnails, images from portfolio platforms like DeviantArt and EyeEm, photos from government websites including the US Department of Defense, and content from news sites such as The Daily Mail and The Sun. If you ask Schuhmann, he says that anything freely available online is fair game. But there is currently no AI regulation in the European Union, and the forthcoming AI Act, whose language will be finalized early this summer, will not rule on whether copyrighted materials can be included in big data sets. Rather, lawmakers are discussing whether to include a provision requiring the companies behind AI generators to disclose what materials went into the data sets their products were trained on, thus giving the creators of those materials the option of taking action. [...] “It has become a tradition within the field to just assume you don’t need consent or you don’t need to inform people, or they don’t even have to be aware of it. There is a sense of entitlement that whatever is on the web, you can just crawl it and put it in a data set,” said Abeba Birhane, a Senior Fellow in Trustworthy AI at Mozilla Foundation.

    (tags: consent opt-in web ai ml laion training-data scraping)

  • Ask HN: Most interesting tech you built for just yourself?

    Fantastic thread of hackers scratching their own itch (via SimonW)

    (tags: via:simonw hacking projects hn hacks open-source)

  • informative Twitter thread on the LessWrong/rationalist/"AI risk"/effective altruism cult

    "some people understand immediately when i try to explain what it was like to be fully in the grip of the yudkowskian AI risk stuff and some people it doesn't seem to land at all, which is probably good for them and i wish i had been so lucky". Bananas...

    (tags: cults ai-risk yudkowski future rokos-basilisk lesswrong effective-altruism)

Links for 2023-04-27

  • Introducing VirusTotal Code Insight: Empowering threat analysis with generative AI

    Impressively, when these models are trained on programming languages, they can adeptly transform code into natural language explanations. [...] Code Insight is a new feature based on Sec-PaLM, one of the generative AI models hosted on Google Cloud AI. What sets this functionality apart is its ability to generate natural language summaries from the point of view of an AI collaborator specialized in cybersecurity and malware. This provides security professionals and analysts with a powerful tool to figure out what the code is up to.  At present, this new functionality is deployed to analyze a subset of PowerShell files uploaded to VirusTotal. The system excludes files that are highly similar to those previously processed, as well as files that are excessively large. This approach allows for the efficient use of analysis resources, ensuring that only the most relevant files (such as PS1 files) are subjected to scrutiny. In the coming days, additional file formats will be added to the list of supported files, broadening the scope of this functionality even further.
    (via Julie on ITC Slack)

    (tags: virustotal analysis malware code reverse-engineering infosec security)

  • How Philly Cheesesteaks Became a Big Deal in Lahore, Pakistan

    This is fascinating history:

    An establishment with a legacy such as [The Lahore Gymkhana Club, founded in 1878 under British rule] needed to continue revamping itself and serve exclusive dishes for its high-end clientele. And the club, along with restaurants aspiring to serve continental food, was bolstered by a growing taste for a new ingredient in town: processed cheese. “Sandwiches gradually started becoming popular in the 1980s because of the [wider] availability of cheese and mushrooms,” says Chaudhry. Until the 1980s, processed cheese was largely imported, and its use was limited to the rich, who would frequent establishments such as the Gymkhana. As Lahori taste buds adapted to and appreciated cheese, production was initiated locally. Demand for cheeseburgers and sandwiches skyrocketed in the 1990s, with a growing number of Pakistanis who’d traveled to the U.S. aspiring to re-create offerings from various popular American chains. One of these is exceptionally familiar. Even today, online food groups in Pakistan are peppered with people asking the community where they can find a cheese­steak in Lahore “like the one at Pat’s.” Many of them post images of the cheese­steaks from the original shop at 9th and Passyunk.

    (tags: food cheesesteaks philadelphia history pakistan lahore sandwiches)

Links for 2023-04-26

  • "Nothing like this will be built again"

    Charlie Stross visits the Advanced Gas-cooled Reactors at Torness nuclear power station:

    The AGRs at Torness [in the UK] are not ordinary civil [nuclear] power reactors. Designed in the 1970's, they were the UK's bid to build an export-earning civil nuclear power system. They're sensitive thoroughbreds, able to reach a peak conversion efficiency of 43% -- that is, able to turn up to 43% of their energy output into electricity. By comparison, a PWR peaks at 31-32%. However, the PWRs have won the race for commercial success: they're much, much, simpler. AGRs are like Concorde -- technological marvels, extremely sophisticated and efficient, and just too damned expensive and complex for their own good. (You want complexity? Torness was opened in 1989. For many years thereafter, its roughly fifty thousand kilometres of aluminium plumbing made it the most complex and demanding piece of pipework in Europe. You want size? The multi-thousand ton reactor core of an AGR is bigger than the entire plant at some PWR installations.) It's a weird experience, crawling over the guts of one of the marvels of the atomic age, smelling the thing (mostly machine oil and steam, and a hint of ozone near the transformers), all the while knowing that although it's one of the safest and most energy-efficient civilian power reactors ever built it's a a technological dead-end, that there won't be any more of them, and that when it shuts down in thirty or forty years' time this colossal collision between space age physics and victorian plumbing will be relegated to a footnote in the history books. "Energy too cheap to meter" it ain't, but as a symbol of what we can achieve through engineering it's hard to beat.

    (tags: engineering nuclear-power agr history uk torness power plumbing)

  • The Toronto Recursive History Project

    "This plaque was commemorated on October 10, 2018, commemorate its own commemoration. Plaques like this one are an integral part of the campaign to support more plaques like this one. By reading this plaque, you have made a valuable addition to the number of people who have read this plaque. To this day and up to the end of this sentence, this plaque continues to be read by people like yourself. Heritage Toronto 2018"

    (tags: heritage toronto recursive plaque commemoration funny)

  • Palantir Demos AI to Fight Wars But Says It Will Be Totally Ethical Don’t Worry About It

    This is a really atrocious idea:

    Palantir also isn’t selling a military-specific AI or large language model (LLM) here, it’s offering to integrate existing systems into a controlled environment. The AIP demo shows the software supporting different open-source LLMs, including  FLAN-T5 XL, a fine-tuned version of GPT-NeoX-20B, and Dolly-v2-12b, as well as several custom plug-ins. Even fine-tuned AI systems off the shelf have plenty of known issues that could make asking them what to do in a warzone a nightmare. For example, they’re prone to simply making things up, or “hallucinating.” GPT-NeoX-20B in particular is an open-source alternative to GPT-3, a previous version of OpenAI’s language model, created by a startup called EleutherAI. One of EleutherAI’s open-source models -- fine-tuned by another startup called Chai -- recently convinced a Belgian man who spoke to it for six weeks to kill himself.  What Palantir is offering is the illusion of safety and control for the Pentagon as it begins to adopt AI. [...] What AIP does not do is walk through how it plans to deal with the various pernicious problems of LLMs and what the consequences might be in a military context. AIP does not appear to offer solutions to those problems beyond “frameworks” and “guardrails” it promises will make the use of military AI “ethical” and “legal.”

    (tags: palantir grim-meathook-future war llm aip military ai ethics)

Links for 2023-04-25

  • Silence Isn't Consent

    More on yesterday's img2dataset failure to support opt-in:

    It isn't "effective altruism" if you have to force people to comply with you.

    (tags: img2dataset ai scraping web consent opt-in)

  • Google Launched Bard Despite Major Ethical Concerns From Its Employees

    "The staffers who are responsible for the safety and ethical implications of new products have been told not to get in the way or to try to kill any of the generative AI tools in development," employees told Bloomberg. The ethics team is now "disempowered and demoralized," according to former and current staffers. Before OpenAI launched ChatGPT in November 2022, Google's approach to AI was more cautious and less consumer-facing, often working in the background of tools like Search and Maps. But since ChatGPT's enormous popularity prompted a "code red" from executives, Google's threshold for safe product releases has been lowered in an effort to keep up with its AI competitors.

    (tags: google ai safety chatgpt bard corporate-responsibility)

Links for 2023-04-24

  • Shitty behaviour around the img2dataset AI scraper

    The author of this popular AI training data scraping tool doesn't seem to understand consent and opt-in:

    Letting a small minority [ie web publishers] prevent the large majority [AI users] from sharing their images and from having the benefit of last gen AI tool would definitely be unethical yes. Consent is obviously not unethical. You can give your consent for anything if you wish. It seems you're trying to decide for million of other people without asking them for their consent.
    In other words, "scraping your content without opt-in is better than denying access to your content for millions of potential future AI users". An issue to implement robots.txt support has been languishing since 2021. Good arguments for blocking the img2dataset user agent in general...

    (tags: opt-in consent ai ml bad-behaviour scraping robots)

  • Why is British media so transphobic?

    Aside from the weirdness of Mumsnet, I didn't know about the influence of the mid-2000s skeptics movement:

    While claiming to be the country’s foremost critical thinkers, the group was riddled with anti-humanities bias and a fetish for a certain kind of “science” that it held to reveal a set of immutable principles upon which the world was built with almost no regard whatsoever for interpretative analysis based on social or historical factors. Part of this mode of thinking was an especially reductivist biologism: the idea that there are immutable realities to be found in our DNA, and if we just paid enough attention to Science and stopped trying to split hairs and discover meaning over in the superfluous disciplines of the humanities, then everything would be much simpler. It’s precisely this kind of biological essentialism — which skirts dangerously close to eugenics — that leads people to think they can “debunk” a person’s claim to their gender identity, or that it should be subjected to rigorous testing by someone in a lab coat before we can believe the subject is who they say they are.

    (tags: debunking scepticism skeptics history terfs uk uk-politics gender)

Links for 2023-04-20

  • Long COVID Is Being Erased -- Again - The Atlantic

    Ed Yong is back writing again!

    Most Americans simply aren’t thinking about COVID with the same acuity they once did; the White House long ago zeroed in on hospitalizations and deaths as the measures to worry most about. And what was once outright denial of long COVID’s existence has morphed into something subtler: a creeping conviction, seeded by academics and journalists and now common on social media, that long COVID is less common and severe than it has been portrayed—a tragedy for a small group of very sick people, but not a cause for societal concern. This line of thinking points to the absence of disability claims, the inconsistency of biochemical signatures, and the relatively small proportion of severe cases as evidence that long COVID has been overblown. “There’s a shift from ‘Is it real?’ to ‘It is real, but …,’” Lekshmi Santhosh, the medical director of a long-COVID clinic at UC San Francisco, told me. Yet long COVID is a substantial and ongoing crisis—one that affects millions of people. However inconvenient that fact might be to the current “mission accomplished” rhetoric, the accumulated evidence, alongside the experience of long haulers, makes it clear that the coronavirus is still exacting a heavy societal toll.

    (tags: long-covid ed-yong covid-19 health medicine society healthcare)

  • OpenAI’s hunger for data is coming back to bite it

    Spot on:

    The company could have saved itself a giant headache by building in robust data record-keeping from the start, she says. Instead, it is common in the AI industry to build data sets for AI models by scraping the web indiscriminately and then outsourcing the work of removing duplicates or irrelevant data points, filtering unwanted things, and fixing typos. These methods, and the sheer size of the data set, mean tech companies tend to have a very limited understanding of what has gone into training their models. 

    (tags: training data provenance ai ml common-crawl openai chatgpt data-protection privacy)

  • Holly Herndon on AI music

    she really gets it. Lots of interesting thoughts

    (tags: holly-herndon ai music ml future tech sampling spawning)

Links for 2023-04-17

Links for 2023-04-14

  • "Why Banker Bob (still) Can’t Get TLS Right: A Security Analysis of TLS in Leading UK Banking Apps"

    Jaysus this is a litany of failure.

    Abstract. This paper presents a security review of the mobile apps provided by the UK’s leading banks; we focus on the connections the apps make, and the way in which TLS is used. We apply existing TLS testing methods to the apps which only find errors in legacy apps. We then go on to look at extensions of these methods and find five of the apps have serious vulnerabilities. In particular, we find an app that pins a TLS root CA certificate, but do not verify the hostname. In this case, the use of certificate pinning means that all existing test methods would miss detecting the hostname verification flaw. We also find one app that doesn’t check the certificate hostname, but bypasses proxy settings, resulting in failed detection by pentesting tools. We find that three apps load adverts over insecure connections, which could be exploited for in-app phishing attacks. Some of the apps used the users’ PIN as authentication, for which PCI guidelines require extra security, so these apps use an additional cryptographic protocol; we study the underlying protocol of one banking app in detail and show that it provides little additional protection, meaning that an active man-in-the-middle attacker can retrieve the user’s credentials, login to the bank and perform every operation the legitimate user could.
    See also: https://www.synopsys.com/blogs/software-security/ineffective-certificate-pinning-implementations/

    (tags: ssl tls certificates certificate-pinning security infosec banking apps uk pci mobile)

  • Using DuckDB to repartition parquet data in S3

    Wow, DuckDB is very impressive -- I had no idea it could handle SELECTs against Parquet data in S3:

    A common pattern to ingest streaming data and store it in S3 is to use Kinesis Data Firehose Delivery Streams, which can write the incoming stream data as batched parquet files to S3. You can use custom S3 prefixes with it when using Lambda processing functions, but by default, you can only partition the data by the timestamp (the timestamp the event reached the Kinesis Data Stream, not the event timestamp!). So, a few common use cases for data repartitioning could include: Repartitioning the written data for the real event timestamp if it's included in the incoming data; Repartitioning the data for other query patterns, e.g. to support query filter pushdown and optimize query speeds and costs; Aggregation of raw or preprocessed data, and storing them in an optimized manner to support analytical queries.

    (tags: duckdb repartitioning s3 parquet orc hive kinesis firehose)

  • Timnit Gebru's anti-'AI pause'

    Couldn't agree more with Timnit Gebru's comments here:

    What is your appeal to policymakers? What would you want Congress and regulators to do now to address the concerns you outline in the open letter? Congress needs to focus on regulating corporations and their practices, rather than playing into their hype of “powerful digital minds.” This, by design, ascribes agency to the products rather than the organizations building them. This language obfuscates the amount of data that is being collected — and the amount of worker exploitation involved with those who are labeling and supplying the datasets, and moderating model outputs. Congress needs to ensure corporations are not using people’s data without their consent, and hold them responsible for the synthetic media they produce — whether it is text or media spewing disinformation, hate speech or other types of harmful content. Regulations need to put the onus on corporations, rather than understaffed agencies. There are probably existing regulations these organizations are breaking. There are mundane “AI” systems being used daily; we just heard about another Black man being wrongfully arrested because of the use of automated facial analysis systems. But that’s not what we’re talking about, because of the hype.

    (tags: data privacy ai ml openai monopoly)

Links for 2023-04-13

  • caesarHQ/textSQL

    This is amazing -- using GPT-3.5 to convert a natural-language query into SQL applied to a specific dataset, in these examples, San Francisco city data and US public census data:

    With CensusGPT, you can ask any question related to census data in natural language. These natural language questions get converted to SQL using GPT-3.5 and are then used to query the census database. Here are some examples: - Five cities with a population over 100,000 and lowest crime - 10 highest income areas in california Here is a similar example from sfGPT: - Which four neighborhoods had the most crime in San Francisco in 2021?

    (tags: sfgpt censusgpt textsql natural-language gpt-3.5 sql querying search open-source)

Links for 2023-04-12

  • Exploring performance differences between Amazon Aurora and vanilla MySQL | Plaid

    This is a major difference between vanilla MySQL and Amazon Aurora (and a potentially major risk!):

    because Aurora MySQL primary and replica instances share a storage layer, they share a set of undo logs. This means that, for a REPEATABLE READ isolation level, the storage instance must maintain undo logs at least as far back as could be required to satisfy transactional guarantees for the primary or any read replica instance. Long-running replica transactions can negatively impact writer performance in Aurora MySQL—finally, an explanation for the incident that spawned this investigation. The same scenario plays out differently in vanilla MySQL because of its different model for undo logs. Vanilla MYSQL: there are two undo logs – one on the writer, and one on the reader. The performance impact of an operation that prevents the garbage collection of undo log records will be isolated to either the writer or the reader. Aurora MySQL: there is a single undo log that is shared between the writer and reader. The performance impact of an operation that prevents the garbage collection of undo log records will affect the entire cluster.

    (tags: aurora aws mysql performance databases isolation-levels)

Links for 2023-04-11

  • EV Database

    Comparison site for electric cars; actually has a realistic model of genuine range for each EV. Full details on charging connectors, charge curves (for charging speed), etc.

    (tags: ev driving cars vehicles)

  • The Black Magic of (Java) Method Dispatch

    Some fascinating details of low-level Java performance optimization, particularly with JIT applied to OO method dispatch:

    Programming languages like Java provide the facilities for subtyping/polymorphism as one of the ways to construct modular and reusable software. This language choice naturally comes at a price, since there is no hardware support for virtual calls, and therefore runtimes have to emulate this behavior. In many, many cases the performance of method dispatch is not important. Actually, in a vast majority of cases, the low-level performance concerns are not the real concerns. However, there are cases when method dispatch performance is important, and there you need to understand how dispatch works, what runtimes optimize for you, and what you can do to cheat and/or emulate similar behavior in your code. For example, in the course of String Compression work, we were faced with the problem of selecting the coder for a given String. The obvious and highly maintainable approach of creating a Coder interface, a few implementations, and dispatching the virtual calls over it, had met some performance problems on the very tiny benchmarks. Therefore, we needed to contemplate something better. After a few experiments, this post was born as a reference for others who might try to do the same. This post also tangentially touches the inlining of virtual calls, as the natural thing during the optimization.
    Discovered via this amazing commit: https://github.com/quarkusio/quarkus/commit/65dd4d43e2644db1c87726139280f9704140167c

    (tags: optimization performance java oo jit coding polymorphism)

Links for 2023-04-07

  • MariaDB.com is dead, long live MariaDB.org

    Oof. Looks like the commercial company behind MariaDB is going south quickly:

    Monty, the creator of MySQL and MariaDB founder, hasn’t been at a company meeting for over a year and a half. The relationship between Monty and the CEO, Michael Howard, is extremely rocky. At a company all-hands meeting Monty and Michael Howard were shouting at each other while up on stage in the auditorium in front of the entire staff. Monty made his position perfectly clear as he shouted his last words before he walked out: “You’re killing my fu&#@$! company!!!” Monty was subsequently voted off the board in July of 2022 solidifying the hostile takeover by Michael Howard. Buyer beware, Monty and his group of founders and database experts are no longer at the company.
    At least the open-source product is still trustworthy, though.

    (tags: databases storage mariadb software open-source companies)

Links for 2023-04-06

  • Google "raters" say they don't have enough time to verify correct answers from Bard

    Contractors say they have a set amount of time to complete each task, like review a prompt, and the time they're allotted for tasks can vary wildly — from as little as 60 seconds to more than several minutes. Still, raters said it's difficult to rate a response when they are not well-versed in a topic the chatbot is talking about, including technical topics like blockchain for example.  Because each assigned task represents billable time, some workers say they will complete the tasks even if they realize they cannot accurately assess the chatbot responses.  "Some people are going to say that's still 60 seconds of work, and I can't recoup this time having sat here and figured out I don't know enough about this, so I'm just going to give it my best guess so I can keep that pay and keep working," one rater said.

    (tags: google raters contractors fact-checking verification llms bard facts)

Links for 2023-04-03

Links for 2023-04-01

Links for 2023-03-31

  • A misleading open letter about sci-fi AI dangers ignores the real risks

    This essay is spot on about the recent AI open letter from the Future of Life Institute, asking for "a 6-month pause on training language models “more powerful than” GPT-4":

    Over 1,000 researchers, technologists, and public figures have already signed the letter. The letter raises alarm about many AI risks: "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" We agree that misinformation, impact on labor, and safety are three of the main risks of AI. Unfortunately, in each case, the letter presents a speculative, futuristic risk, ignoring the version of the problem that is already harming people. It distracts from the real issues and makes it harder to address them. The letter has a containment mindset analogous to nuclear risk, but that’s a poor fit for AI. It plays right into the hands of the companies it seeks to regulate.
    Couldn't agree more.

    (tags: ai scifi future risks gpt-4 regulation)

Links for 2023-03-30

  • AI and the American Smile. How AI misrepresents culture through a facial expression

    There are 18 images in the Reddit slideshow [a series of Midjourney-generated images of "selfies through history"] and they all feature the same recurring composition and facial expression. For some, this sequence of smiling faces elicits a sense of warmth and joyousness, comprising a visual narrative of some sort of shared humanity [...] But what immediately jumped out at me is that these AI-generated images were beaming a secret message hidden in plain sight. A steganographic deception within the pixels, perfectly legible to your brain yet without the conscious awareness that it’s being conned. Like other AI “hallucinations,” these algorithmic extrusions were telling a made up story with a straight face — or, as the story turns out, with a lying smile. [...] How we smile, when we smile, why we smile, and what it means is deeply culturally contextual.

    (tags: ai america culture photography midjourney smiling smiles context history)

  • Heat pump myths

    "Social media and newspapers are flooded with myths about heat pumps. Let's take them one by one in this post."

    (tags: myths mythbusting heat-pumps heating house home)

  • Belgian man dies by suicide following exchanges with chatbot

    Grim. This is the downside of LLM-based chatbots with ineffective guardrails against toxic output.

    "Without these conversations with the chatbot, my husband would still be here," the man's widow has said, according to La Libre. She and her late husband were both in their thirties, lived a comfortable life and had two young children. However, about two years ago, the first signs of trouble started to appear. The man became very eco-anxious and found refuge with ELIZA, the name given to a chatbot that uses GPT-J, an open-source artificial intelligence language model developed by EleutherAI. After six weeks of intensive exchanges, he took his own life.
    There's a transcript of the last conversation with the bot here: https://news.ycombinator.com/item?id=35344418 .

    (tags: bots chatbots ai gpt gpt-j grim future grim-meathook-future)

Links for 2023-03-28

Links for 2023-03-27

  • What Will Transformers Transform? – Rodney Brooks

    This is a great essay on GPT and LLMs:

    Roy Amara, who died on the last day of 2007, was the president of a Palo Alto based think tank, the Institute for the future, and is credited with saying what is now known as Amara’s Law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." This has been a common problem with Artificial Intelligence, and indeed of all of computing. In particular, since I first became conscious of the possibility of Artificial Intelligence around 1963 (and as an eight year old proceeded to try to build my own physical and intelligent computers, and have been at it ever since), I have seen these overestimates many many times.
    and:
    I think that GPTs will give rise to a new aphorism (where the last word might vary over an array of synonymous variations): "If you are interacting with the output of a GPT system and didn’t explicitly decide to use a GPT then you’re the product being hoodwinked." I am not saying everything about GPTs is bad. I am saying that, especially given the explicit warnings from OpenAI, that you need to be aware that you are using an unreliable system. Using an unreliable system sounds awfully unreliable, but in August 2021 I had a revelation at TED in Monterey, California, when Chris Anderson (the TED Chris), was interviewing Greg Brockman, the Chairman of Open AI about an early version of GPT. He said that he regularly asked it questions about code he wanted to write and it very quickly gave him ideas for libraries to use, and that was enough to get him started on his project. GPT did not need to be fully accurate, just to get him into the right ballpark, much faster than without its help, and then he could take it from there. Chris Anderson (the 3D robotics one, not the TED one) has likewise opined (as have responders to some of my tweets about GPT) that using ChatGPT will get him the basic outline of a software stack, in a well tread area of capabilities, and he is many many times more productive than with out it. So there, where a smart person is in the loop, unreliable advice is better than no advice, and the advice comes much more explicitly than from carrying out a conventional search with a search engine. The opposite of useful can also occur, but again it pays to have a smart human in the loop. Here is a report from the editor of a science fiction magazine which pays contributors. He says that from late 2022 through February of 2023 the number of submissions to the magazine increased by almost two orders of magnitude, and he was able to determine that the vast majority of them were generated by chatbots. He was the person in the loop filtering out the signal he wanted, human written science fiction, from vast volumes of noise of GPT written science fiction. Why should he care? Because GPT is an auto-completer and so it is generating variations on well worked themes. But, but, but, I hear people screaming at me. With more work GPTs will be able to generate original stuff. Yes, but it will be some other sort of engine attached to them which produces that originality. No matter how big, and how many parameters, GPTs are not going to to do that themselves. When no person is in the loop to filter, tweak, or manage the flow of information GPTs will be completely bad. That will be good for people who want to manipulate others without having revealed that the vast amount of persuasive evidence they are seeing has all been made up by a GPT. It will be bad for the people being manipulated. And it will be bad if you try to connect a robot to GPT. GPTs have no understanding of the words they use, no way to connect those words, those symbols, to the real world. A robot needs to be connected to the real world and its commands need to be coherent with the real world. Classically it is known as the “symbol grounding problem”. GPT+robot is only ungrounded symbols. It would be like you hearing Klingon spoken, without any knowledge other than the Klingon sound stream (even in Star Trek you knew they had human form and it was easy to ground aspects of their world). A GPT telling a robot stuff will be just like the robot hearing Klingonese. My argument here is that GPTs might be useful, and well enough boxed, when there is an active person in the loop, but dangerous when the person in the loop doesn’t know they are supposed to be in the loop. [This will be the case for all young children.] Their intelligence, applied with strong intellect, is a key component of making any GPT be successful.

    (tags: gpts rodney-brooks ai ml amaras-law hype technology llms future)

  • Employees Are Feeding Sensitive Business Data to ChatGPT

    How unsurprising is this? And needless to say, a bunch of that is being reused for training:

    In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential information, client data, source code, or regulated information to the LLM.  In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company.

    (tags: chatgpt openai ip privacy data-protection security)

  • GitHub Copilot is open to remote prompt-injection attacks

    GitHub Copilot is also based on a large language model. What does indirect prompt injection do to it? Again, we demonstrate that, as long as an attacker controls part of the context window, the answer is: pretty much anything. Attackers only have to manipulate the documentation of a target package or function. As you reference and use them, this documentation is loaded into the context window based on complex and ever-changing heuristics. We show [...] how importing a synthetic library can lead Copilot to introduce subtle or not-so-subtle vulnerabilities into the code generated for you.

    (tags: injection copilot security exploits github llms chatgpt)

Links for 2023-03-24

  • Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow

    What we have here is an early sign we’re stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities. In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail. It’s a laughable situation but one with potentially serious consequences. Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, a miasma that is impossible to map completely or debunk authoritatively. All because Microsoft, Google, and OpenAI have decided that market share is more important than safety.

    (tags: google ai ml microsoft openai chatgpt trust spam misinformation disinformation)

Links for 2023-03-23

  • Vatican flag SVG on Wikimedia Commons was incorrect for 5 years, and widely copied

    In 2017 a Wikimedia Commons user changed the inside of the tiara to red because that's how it appears on the Vatican Coat of Arms. But this assumption turned out to be faulty, because the official flag spec sheet uses different colors than the Coat of Arms. The mistake was quickly noticed by an anonymous IP who wrote an extensive and well-researched explanation of the error on the file's talk page. Unfortunately, nobody read it, and the mistake lived on for 5 years before another user noticed it and reverted the file.

    (tags: wikipedia wikimedia commons vatican flags oops)

  • ThumbHash

    "A very compact representation of an image placeholder. Store it inline with your data and show it while the real image is loading for a smoother loading experience."

    (tags: graphics images webdev compression lossy thumbnails)

Links for 2023-03-22

  • new LFP batteries will unlock cheaper electric vehicles

    Lithium ferrous phosphate (LFP) batteries, the type to be produced at the new [Ford] plant are a lower-cost alternative to the nickel- and cobalt-containing batteries used in most electric vehicles in the US and Europe today. While the technology has grown in popularity in China, Ford’s factory, developed in partnership with the Chinese battery giant CATL, marks a milestone in the West. By cutting costs while also boosting charging speed and extending lifetime, LFP batteries could help expand EV options for drivers. 

    (tags: lfp technology ev cars batteries renewable-energy)

  • You Broke Reddit: The Pi-Day Outage : RedditEng

    Quality post-mortem writeup of last week's Reddit outage. tl;dr: an in-place Kubernetes upgrade broke it. We use blue/green deployments -- with two separate parallel k8s clusters -- in order to avoid this risk, as k8s upgrades are very very risky in our experience; tiny "minor" changes often seem to cause breakage.

    (tags: k8s kubernetes outages reddit ops post-mortems)

  • Superb thread on effective AI regulation

    from Baldur Bjarnason:

    First, you clarify that for the purposes of Section 230 protection (or similar), whoever provides the AI as a service is responsible for its output as a publisher. If Bing Chat says something offensive then Microsoft would be as liable as if it were an employee; You'd set a law requiring tools that integrate generative AI to attach disclosures to the content. Gmail/Outlook should pop up a notice when you get an email that their AI generated. Word/Docs should have metadata fields and notices when you open files that have used built-in AI capabilities. AI chatbots have to disclose that they are bots. Copilot should add a machine-parsable code comment. You could always remove the metadata, but doing so would establish an intent to deceive; Finally, you'd mandate that all training data sets be made opt-in (or that all of its contents are released under a permissive license) and public. Heavy fines for non-disclosure. Heavy fines for violating opt-in. Even heavier fines for lying about your training data set. Make every AI model a "vegan" model. Remove every ethical and social concern about the provenance and rights regarding the training data.
    I think #3 in particular is the most important of all.

    (tags: ai regulation data-privacy training llm ethics)

  • Bing Chat is still vulnerable to hidden prompt injection attacks

    happily parses hidden text in webpages, acting on information there that isn't visible to human viewers. Related: https://twitter.com/matteosonoioo/status/1630941926454185992/photo/1 , where Matteo Contrini demonstrated an attack to turn it into a scammer with prompt injection.

    (tags: bing-chat bing chatgpt openai prompt-injection exploits attacks hidden-text)

Links for 2023-03-20

  • Pop Culture Pulsar: Origin Story of Joy Division's Unknown Pleasures Album Cover

    Great dig into the CP1919 pulsar signal plot that was used for "Unknown Pleasures":

    This plotting of sequences like this, it started just a little bit earlier when we were looking at potentially drifting subpulses within the major pulse itself. So, the thought was, well, is there something like this peak here, which on the next pulse moves over here, and then moves over here, and over there. Actually, would be moving this way in that case – either way. I think Frank Drake and I published a paper in Science Magazine on exactly that issue – suggesting there might be drifting subpulses within the major pulse, which would then get back to the physics of what was causing the emission in the first place. So, then the thought was, well let’s plot out a whole array of pulses, and see if we can see particular patterns in there. So that’s why, this one was the first I did – CP1919 – and you can pick out patterns in there if you really work at it. But I think the answer is, there weren’t any that were real obvious anyway. I don’t really recall, but my bet is that the first one of these that I did, I didn’t bother to block out the stuff, and I found that it was just too confusing. So then, I wrote the program so that I would block out when a hill here was high enough, then the stuff behind it would stay hidden. And it was pretty easy to do from a computer perspective.

    (tags: design joy-division music science physics pulsars astronomy cp1919 dataviz)

  • moyix/gpt-wpre: Whole-Program Reverse Engineering with GPT-3

    This is a little toy prototype of a tool that attempts to summarize a whole binary using GPT-3 (specifically the text-davinci-003 model), based on decompiled code provided by Ghidra. However, today's language models can only fit a small amount of text into their context window at once (4096 tokens for text-davinci-003, a couple hundred lines of code at most) -- most programs (and even some functions) are too big to fit all at once. GPT-WPRE attempts to work around this by recursively creating natural language summaries of a function's dependencies and then providing those as context for the function itself. It's pretty neat when it works! I have tested it on exactly one program, so YMMV.

    (tags: gpt-3 reverse-engineering ghidra decompilation reversing llm)

Links for 2023-03-16

Links for 2023-03-15

  • Cat6a FTP Tool-Less Keystone Module

    For future use -- CAT6A cable endpoints which don't require tricky crimping: "no crimp tool required at all, very much worth the extra cost, and they clip into the wall sockets or a patch panel ... you can do them with your fingers and a flush snips to get rid of the ends after you push the wires in" says Adam C on ITC Slack, at https://irishtechcommunity.slack.com/archives/C11BG27L2/p1678841261913069

    (tags: cat6a wiring home networking cables via:itc)

Links for 2023-03-14

  • Infra-Red, In Situ (IRIS) Inspection of Silicon

    Cool:

    This post introduces a technique I call “Infra-Red, In Situ” (IRIS) inspection. It is founded on two insights: first, that silicon is transparent to infra-red light; second, that a digital camera can be modified to “see” in infra-red, thus effectively “seeing through” silicon chips. We can use these insights to inspect an increasingly popular family of chip packages known as Wafer Level Chip Scale Packages (WLCSPs) by shining infrared light through the back side of the package and detecting reflections from the lowest layers of metal using a digital camera. This technique works even after the chip has been assembled into a finished product. However, the resolution of the imaging method is limited to micron-scale features.

    (tags: electronics hardware reversing bunnie-huang infrared x-ray-vision silicon)

Links for 2023-03-09

  • Seabirds are not at risk from offshore wind turbines

    At least according to this survey by Swedish power giant Vattenfall:

    The movements of herring gulls, gannets, kittiwakes, and great black-backed gulls were studied in detail from April to October, when bird activity is at its height. (This study only looked at four bird species, but Vattenfall says the model can and should be applied to more types of seabirds and to onshore wind farms as well.) The study’s findings: Not a single collision between a bird and a rotor blade was recorded.

    (tags: seabirds birds safety wind-turbines offshore-wind renewables wildlife)

  • Metformin, a new drug to prevent long covid

    'Over a thousand people with mild-to-moderate Covid were randomly assigned to 2 weeks of metformin (500 mg pills, 1 on day 1, twice a day for 4 days, then 500 mg in AM and 1000 mg in PM for 9 days) or placebo. There was a 42% reduction of subsequent Long Covid as you can see by the event curve below, which corresponds to an absolute decrease of 4.3%, from 10.6% reduced to 6.3%.' Still no use for _treating_ long COVID though.

    (tags: covid-19 long-covid metformin drugs papers)

Links for 2023-03-03

Links for 2023-03-02

  • ChatGPT for r/BuyItForLife

    This is actually really effective; the past 3 years of product recommendations from r/BuyItForLife, queryable using ChatGPT (via valen)

    (tags: via:valen ai recommendations search products reviews)

  • Hundreds of residents vent anger over 'entirely pointless' hydrogen heating trial

    Greenwashing grey hydrogen as a "renewable" means of keeping home gas heating alive is not going well in Whitby:

    Influential energy analyst Michael Liebreich and University of Cambridge mechanical engineering professor David Cebon drew attention to the now-37 independent studies showing that hydrogen boilers would require about five times more renewable energy than heat pumps — likely making them significantly more expensive to run. “This trial is entirely pointless in terms of proving whether hydrogen is the most cost-effective way of decarbonising homes,” Liebreich told the audience. “Every single systems analysis from every single expert who is not paid for by the gas industry or the heating industry has concluded that hydrogen plays little or no role. “The thing that it’s intended to do, though, is maintain the debate and discussion and the delay [of decarbonisation]. If you’re running a gas network organisation, as our next speaker [Cadent head of strategy, Angela Needle] does, what you really want is to continue to harvest profits off that. If you invest today in a gas distribution network, you get to charge 6% per year for 45 years on that investment and that’s until 2068.”

    (tags: hydrogen h2 grey-hydrogen greenwashing gas natural-gas heating homes decarbonisation)

Links for 2023-03-01

  • Nokia G22

    This is a decent product -- "Nokia has announced one of the first budget Android smartphones designed to be repaired at home allowing users to swap out the battery in under five minutes, in partnership with iFixit." I've been planning to buy a more repairable phone for my next iteration, so it's either this or a Fairphone.

    (tags: android hardware nokia phones right-to-repair repair ifixit)

  • copyright-respecting AI model training

    Alex J Champandard is thinking about how AI model training can be done in a copyright-respecting and legal fashion:

    With the criticism of web-scale datasets, it's legitimate to ask the question: "What models are trained with best-in-class Copyright practices?" Answer: StyleGAN and FFHQ github.com/NVlabs/ffhq-dataset 100% transparent dataset, clear copyright, opt-in licensing, model respects terms.

    (tags: copyright legal rights ip ai ml models training stylegan ffhq flickr)

  • The tech tycoon martyrdom charade

    Anil Dash:

    It's impossible to overstate the degree to which many big tech CEOs and venture capitalists are being radicalized by living within their own cultural and social bubble. Their level of paranoia and contrived self-victimization is off the charts, and is getting worse now that they increasingly only consume media that they have funded, created by their own acolytes. In a way, it's sort of like a "VC Qanon", and it colors almost everything that some of the most powerful people in the tech industry see and do — and not just in their companies or work, but in culture, politics and society overall. We're already seeing more and more irrational, extremist decision-making that can only be understood through this lens, because on its own their choices seem increasingly unfathomable.

    (tags: vc tech anil-dash radicalization politics us-politics)

Links for 2023-02-20

  • Better Thermostat

    Interesting smart home component for Home Assistant --

    This custom component will add crucial features to your climate-controlling TRV (Thermostatic Radiator Valves) to save you the work of creating automations to make it smart. It combines a room-temperature sensor, window/door sensors, weather forecasts, or an ambient temperature probe to decide when it should call for heat and automatically calibrate your TRVs to fix the imprecise measurements taken in the radiator’s vicinity.
    So basically if you have smart TRVs and a room temperature sensor, you can drive that as a pair.

    (tags: thermostat smart-home home-assistant heating trvs)

Links for 2023-02-16

Links for 2023-02-14

  • a COVID-aware activity tracker

    Interesting thought experiment regarding chronic disease, long COVID, ME/CFS etc: 'what might be in a convalescence mode, or a rest mode? And while I’m thinking of that, there’s a separate need, I think (hey! validate through research!) for, I don’t know, a chronic illness mode, because convalescence and rest are different things with different qualities distinct from the requirements and needs of people with long-term chronic illnesses. Some people who responded to my thinking-out-loud thread shared that you can use sleep tracking as a way to inform the spoons-for-the-day.'

    (tags: apple fitness accessibility convalescence chronic-disease activity-tracking long-covid me)

Links for 2023-02-13

  • A New Drug Switched Off My Appetite. What’s Left? | WIRED

    How long is it before there’s an injection for your appetites, your vices? Maybe they’re not as visible as mine. Would you self-administer a weekly anti-avarice shot? Can Big Pharma cure your sloth, lust, wrath, envy, pride? Is this how humanity fixes climate change—by injecting harmony, instead of hoping for it at Davos?

    (tags: mounjaro food eating weight calories future)

  • Silicon Valley tech companies are the real paperclip maximizers

    Another good Ted Chiang article --

    Elon Musk spoke to the National Governors Association and told them that “AI is a fundamental risk to the existence of human civilization.” [...] This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies. Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.

    (tags: superintelligence ted-chiang silicon-valley capitalism ai future civilization paperclip-maximisers)

Links for 2023-02-08

Links for 2023-02-02

Links for 2023-01-31

  • Study of 500,000 Medical Records Links Viruses to Alzheimer's Again And Again

    While not demonstrating a causal link, the correlations are pretty striking -- good argument for greatly increasing vaccination rates for many viral diseases.

    Around 80 percent of the viruses implicated in brain diseases were considered 'neurotrophic', which means they could cross the blood-brain barrier. "Strikingly, vaccines are currently available for some of these viruses, including influenza, shingles (varicella-zoster), and pneumonia," the researchers write. "Although vaccines do not prevent all cases of illness, they are known to dramatically reduce hospitalization rates. This evidence suggests that vaccination may mitigate some risk of developing neurodegenerative disease." The impact of viral infections on the brain persisted for up to 15 years in some cases. And there were no instances where exposure to viruses was protective.

    (tags: viruses health medicine vaccines vaccination alzheimers parkinsons diseases)

Links for 2023-01-30

Links for 2023-01-24

  • CNET's AI Journalist Appears to Have Committed Extensive Plagiarism

    CNET used an AI to generate automated content for their site, and are definitely in the "finding out" stage from the looks of things:

    All told, a pattern quickly emerges. Essentially, CNET's AI seems to approach a topic by examining similar articles that have already been published and ripping sentences out of them. As it goes, it makes adjustments — sometimes minor, sometimes major — to the original sentence's syntax, word choice, and structure. Sometimes it mashes two sentences together, or breaks one apart, or assembles chunks into new Frankensentences. Then it seems to repeat the process until it's cooked up an entire article. [...] The question of exactly how CNET's disastrous AI was trained may end up taking center stage as the drama continues to unfold. At a CNET company meeting late last week [...] the outlet's executive vice president of content and audience refused to tell staff — many of them acclaimed tech journalists who have written extensively about the rise of machine learning — what data had been used to train the AI. The legality of using data to train an AI without the consent of the people who created that data is currently being tested by several lawsuits against the makers of prominent image generators, and could become a flashpoint in the commercialization of the tech.

    (tags: ai cnet content seo spam llms plagiarism training-data)

  • omni-epd

    A Python module to abstract usage of several different types of EPD (electronic paper displays), including Inky and Waveshare hardware.

    (tags: epd inky waveshare e-paper displays hardware python linux)

  • pycasso

    "a picture frame to show you random AI art every day" -- nice little epd/pi hack

    (tags: diy photos projects hacks epd e-paper ai art dall-e)

  • EC2 instance network error metrics

    looks like Amazon are now exposing a bunch of error metrics for their EC2 instance network drivers in Linux

    (tags: metrics ec2 ops drivers networking bandwidth errors)

Links for 2023-01-23

  • The bivalent vaccine booster outperforms

    Solid data now up for the bivalent BA.5 SARS-CoV-2 vaccine, says Eric Topol: "we now have extensive data that is quite encouraging -- better and broader than expected -- that I’m going to briefly review here"

    (tags: sars-cov-2 covid-19 vaccines eric-topol medicine health)

  • Long COVID: major findings, mechanisms and recommendations

    Current state of research into Long COVID, courtesy of Nature Reviews Microbiology.

    Long COVID is an often debilitating illness that occurs in at least 10% of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections. More than 200 symptoms have been identified with impacts on multiple organ systems. At least 65 million individuals worldwide are estimated to have long COVID, with cases increasing daily. Biomedical research has made substantial progress in identifying various pathophysiological changes and risk factors and in characterizing the illness; further, similarities with other viral-onset illnesses such as myalgic encephalomyelitis/chronic fatigue syndrome and postural orthostatic tachycardia syndrome have laid the groundwork for research in the field. In this Review, we explore the current literature and highlight key findings, the overlap with other conditions, the variable onset of symptoms, long COVID in children and the impact of vaccinations. Although these key findings are critical to understanding long COVID, current diagnostic and treatment options are insufficient, and clinical trials must be prioritized that address leading hypotheses.

    (tags: long-covid covid-19 health medicine reviews nature papers)

Links for 2023-01-11

  • This app will self-destruct: How Belarusian hackers created an alternative Telegram

    Great idea:

    When a 25-year-old activist from Minsk who goes by Pavlo was detained by Belarusian KGB security forces last summer, he knew they would search his phone, looking for evidence of his involvement in anti-government protests. The police officer asked for Pavlo’s password to Telegram, the most popular messenger app among Belarusian activists, which he gave him. The officer entered it and... found nothing. All secret chats and news channels had disappeared, and after a few minutes of questioning Pavlo was released. Pavlo’s secret? A secure version of Telegram, developed by a hacktivist group from Belarus called the Cyber Partisans. Partisan Telegram, or P-Telegram, automatically deletes pre-selected chats when someone enters the so-called SOS password.
    ... after entering a fake [SOS] password, P-Telegram can automatically log out of the account, delete selected chats and channels, and even send a notification about the arrest of the account owners to their friends or families. P-Telegram also allows other activists to remotely activate the SOS password on the detainee’s phone. For this, they need to send a code word to any of the shared Telegram chats. Another feature on P-Telegram automatically takes photos of law enforcement officers on the front camera when they enter a fake password. “We warn users that this can be dangerous, as this photo will be stored on the phone, revealing that a person may use Partisan Telegram,” Shemetovets said.  Cyber Partisans are constantly updating their app, fixing bugs, and adding new features. They also regularly conduct independent audits to ensure that P-Telegram complies with all security measures. A recent audit by Open Technology Fund’s Red Team Lab proved that it is almost impossible for “casual observers without technical knowledge and specialized equipment” to identify the existence of P-Telegram on a device.

    (tags: p-telegram hacktivism security telegram messaging privacy activism duress-passwords)

Links for 2023-01-10

Links for 2023-01-09

  • A healthcare algorithm started cutting care, and no one knew why

    This is an absurd hellscape:

    Legal Aid filed a federal lawsuit in 2016, arguing that the state had instituted a new [healthcare] policy without properly notifying the people affected about the change. There was also no way to effectively challenge the system, as they couldn’t understand what information factored into the changes, De Liban argued. No one seemed able to answer basic questions about the process. “The nurses said, ‘It’s not me; it’s the computer,’” De Liban says. When they dug into the system, they discovered more about how it works. Out of the lengthy list of items that assessors asked about, only about 60 factored into the home care algorithm. The algorithm scores the answers to those questions, and then sorts people into categories through a flowchart-like system. It turned out that a small number of variables could matter enormously: for some people, a difference between a score of a three instead of a four on any of a handful of items meant a cut of dozens of care hours a month. (Fries didn’t say this was wrong, but said, when dealing with these systems, “there are always people at the margin who are going to be problematic.”) [...] From the state’s perspective, the most embarrassing moment in the dispute happened during questioning in court. Fries was called in to answer questions about the algorithm and patiently explained to De Liban how the system works. After some back-and-forth, De Liban offered a suggestion: “Would you be able to take somebody’s assessment report and then sort them into a category?” [...] Fries said he could, although it would take a little time. He looked over the numbers for Ethel Jacobs. After a break, a lawyer for the state came back and sheepishly admitted to the court: there was a mistake. Somehow, the wrong calculation was being used. They said they would restore Jacobs’ hours. “Of course we’re gratified that DHS has reported the error and certainly happy that it’s been found, but that almost proves the point of the case,” De Liban said in court. “There’s this immensely complex system around which no standards have been published, so that no one in their agency caught it until we initiated federal litigation and spent hundreds of hours and thousands of dollars to get here today. That’s the problem.”

    (tags: algorithms government health healthcare automation grim-meathook-future future)

Links for 2023-01-04

  • Turning Google smart speakers into wiretaps for $100k

    This is some very impressive work on reverse engineering a fairly advanced IoT device (the Google Home Mini), discovering and exploiting its security holes.

    I was recently rewarded a total of $107,500 by Google for responsibly disclosing security issues in the Google Home smart speaker that allowed an attacker within wireless proximity to install a “backdoor” account on the device, enabling them to send commands to it remotely over the Internet, access its microphone feed, and make arbitrary HTTP requests within the victim’s LAN (which could potentially expose the Wi-Fi password or provide the attacker direct access to the victim’s other devices). These issues have since been fixed.

    (tags: security google wiretapping exploits hacking iot reverse-engineering)

  • Infectiousness of SARS-CoV-2 breakthrough infections and reinfections during the Omicron wave | Nature Medicine

    This was an open question from earlier in the pandemic -- does vaccination reduce transmission and infectiousness: 'In our main analysis, we found that any COVID-19 vaccine reduced infectiousness by 22% (6–36%) and prior infection reduced infectiousness by 23% (3–39%). Hybrid immunity reduced infectiousness by 40% (20–55%).'

    (tags: immunity covid-19 infection transmission hybrid-immunity papers)

  • Caddy

    lhl likes Caddy:

    Caddy https://caddyserver.com/ came up in conversation earlier today. It's been my favorite reverse proxy/web server for the past few years because of how simple it is to setup and for it's automagic LetsEncrypt setup. (This post is actually being pushed through Caddy on my fediverse server, and was basically the easiest part of the setup). For those interested, it performs pretty competitively with nginx: https://blog.tjll.net/reverse-proxy-hot-dog-eating-contest-caddy-vs-nginx/ but IMO the main selling point (why I first installed it) was the automagic HTTPS setup: https://caddyserver.com/docs/automatic-https

    (tags: caddy reverse-proxies ops http https lets-encrypt servers)

Links for 2022-12-28

  • Bird.makeup

    A gateway bot from Twitter to Mastodon --

    One of the things I would miss here on Mastodon was all of the alerts from my local infrastructure and government twitter accounts. These will likely take a very long time to make the migration. With https://bird.makeup, you can create bot accounts that put those tweets in your Mastodon timeline.

    (tags: twitter mastodon gateways bots tweets)

Links for 2022-12-26

Links for 2022-12-20

Links for 2022-12-16

  • Digital scrapie

    "a hypothetical scenario in which a machine learning system trained on its own output becomes unable to function properly or make meaningful predictions"

    (tags: scrapie brains training ai ml feedback)

  • Clip retrieval

    Via ted byfield: "If you've wondered what AI-bots are ~thinking while they generate an image, here you go." Reverse-engineering the training samples which Stable Diffusion et al are combining for a given text query, in the laion5B or laion_400m datasets

    (tags: ai clips laion ml stable-diffusion text2image)

Links for 2022-12-15

Links for 2022-12-14

Links for 2022-12-13

  • The human cost of neurotechnology failure

    'This is your brain on capitalism'. A shitty cyberpunk future:

    What about when the [bricked] device is inside your body? Earlier this year, many people with Argus optical implants – which allow blind people to see – lost their vision when the manufacturer, Second Sight, went bust. Nano Precision Medical, the company's new owners, aren't interested in maintaining the implants, so that's the end of the road for everyone with one of Argus's "bionic" eyes. The $150,000 per eye that those people paid is gone, and they have failing hardware permanently wired into their nervous systems. Having a bricked eye implant doesn't just rob you of your sight – many Argus users experience crippling vertigo and other side effects of nonfunctional implants. The company has promised to "do our best to provide virtual support" to people whose Argus implants fail – but no more parts and no more patches."

    (tags: health implants cyberpunk future grim neurotechnology brain right-to-repair open-hardware open-source medicine capitalism ip ethics)

Links for 2022-12-09

Links for 2022-12-08

  • "A Brief History of InvSqrt"

    A 40-page Bachelor's degree thesis on the legendary bit-hacking Quake III Q_rsqrt() implementation (via redacted):

    This function, commonly called InvSqrt, approximates the inverse (or reciprocal) square root of a 32-bit floating point number very quickly. It can be found in many open source libraries and games on the Internet, such as the C source code for Quake III: Arena. This raises many questions. Why is it needed? Who wrote it? How does it work? How well does it work? Is it still useful with modern processors today? And finally, can it be improved to work better? This thesis will examine those questions and give a unique interpretation and optimization of the function itself.

    (tags: via:redacted sqrt maths quake-3 0x5f3759df)

  • sissbruecker/linkding: Self-hosted bookmark service

    an OSS clone of a Pinboard-style bookmark service. 'designed be to be minimal, fast, and easy to set up using Docker.' Bookmarking for emergency use only; if anything happens to Pinboard.in, I'll have this to fall back to. (via dahamsta)

    (tags: via:dahamsta bookmarks python oss links web)

Links for 2022-12-06

  • Home Assistant with a Solis Hybrid inverter

    good write-up on the process to get data out of the SolisCloud backend and into Home Assistant

    (tags: home-assistant home solar-power solis soliscloud)

  • @jm_links@botsin.space

    My Pinboard links feed is now on the Fediverse at botsin.space; I'll blog up the process shortly

    (tags: bots blog pinboard links mastodon)

  • WiFi calling blocked on Pixel phones

    what the hell? "Unless you're on an operator that sells Pixel phones directly, who basically comprise the "Google list" for these features, [wifi calling] won't work for any [directly-purchased] Pixel phone [in Ireland]. Same all over Europe. VoLTE won't work either when on a mobile network (data speeds will drop to 3G when on a voice call) [...] Your only option would be to root the phone to get it to work. There seem to have been some recent changes on this but seems like Eir still no go." I've been wondering why VoLTE and VoWifi have been unavailable on my phone for several months now, assuming it was an operator issue. Finally I was sent this link by a poster on another forum -- it's not an issue with the operator, it's a builtin limitation on the phone. All I can presume is that Google have done exclusivity deals with some providers in some regions, but is keeping this secret for some reason. If I'd known this in advance, I'd probably have bought a different phone; absolutely terrible decision. Reportedly it can be reversed via rooting the phone, at least.

    (tags: android google pixel wifi-calling vowifi volte lte mobile)

Links for 2022-12-05

  • AI-generated answers temporarily banned on Stack Overflow

    Ranked user-generated content sites like Stack Overflow are really going to have a problem with the incoming plausible-sounding bullshit flood:

    “The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” wrote the mods (emphasis theirs). “As such, we need the volume of these posts to reduce [...] So, for now, the use of ChatGPT to create posts here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT after this temporary policy is posted, sanctions will be imposed to prevent users from continuing to post such content, even if the posts would otherwise be acceptable.”

    (tags: chatgpt ai autocomplete stack-overflow coding spam ugc)

  • Cory Doctorow Wants You to Know What Computers Can and Can’t Do

    "Do you think that the concern over A.I.’s expanding capabilities is misplaced? I do. I think that the problems of A.I. are not its ability to do things well but its ability to do things badly, and our reliance on it nevertheless. So the problem isn’t that A.I. is going to displace all of our truck drivers. The fact that we’re using A.I. decision-making at scale to do things like lending, and deciding who is picked for child-protective services, and deciding where police patrols go, and deciding whether or not to use a drone strike to kill someone, because we think they’re a probable terrorist based on a machine-learning algorithm—the fact that A.I. algorithms don’t work doesn’t make that not dangerous. In fact, it arguably makes it more dangerous. The reason we stick A.I. in there is not just to lower our wage bill so that, rather than having child-protective-services workers go out and check on all the children who are thought to be in danger, you lay them all off and replace them with an algorithm."

    (tags: ai ml cory-doctorow tech future capitalism)

Links for 2022-11-29

  • Sumana Harihareswara: "Pinboard brittleness"

    Worrying thread -- I didn't realise Pinboard was at risk of atrophy. This blog is built on it!

    (tags: pinboard software atrophy future via:mefi via:danny)

  • Pushwoosh and the Pincer Trojan

    yikes. "U.S. Govt. Apps Bundled Russian Code With Ties to Mobile Malware Developer":

    A recent scoop by Reuters revealed that mobile apps for the U.S. Army and the Centers for Disease Control and Prevention (CDC) were integrating software that sends visitor data to a Russian company called Pushwoosh, which claims to be based in the United States. But that story omitted an important historical detail about Pushwoosh: In 2013, one of its developers admitted to authoring the Pincer Trojan, malware designed to surreptitiously intercept and forward text messages from Android mobile devices.

    (tags: trojans pushwoosh push-notifications apps mobile)

Links for 2022-11-22

Links for 2022-11-21

Links for 2022-11-18

  • Your EU consumer rights

    A little-known detail of the EU Consumer Rights Directive: you have a right to repair or replacement of faulty goods if they fail within 2 years of purchase. The nice thing about this is that so much hardware has built-in obsolescence after only 1 year... you may have to invoke the magic words "EU Consumer Rights Directive" to get this to happen, though. Worth noting that according to one account "the rights only apply in the country of purchase. I've had Apple refuse to replace a Magic trackpad that died after 14 months and they would not repair an Airpods case that died after 18 months. I had purchased both in the UK."

    (tags: built-in-obsolescence hardware rights consumer-rights eu right-to-repair repair faulty-goods)

  • Irish consumer law gives 6 years of repair/replacement rights

    Even better than the EU consumer rights directive!

    Under Irish consumer law, consumers are entitled to a free of charge repair or (depending on the circumstances) may be entitled to a replacement, discount or refund by the seller, of defective goods or goods which do not conform with the contract of sale. These rights expire six years from delivery of the goods.

    (tags: consumer-rights law ireland consumer rights repair)

Links for 2022-11-16

  • Dan Luu on the "cold boot" scenario

    Thought-provoking Mastodon thread about full-scale disaster recovery for large-scale modern software platforms. Here's a gem:

    When I was in Azure, I asked around about what the plan was if "the really big one" hit since deep expertise was nearly totally concentrated in Redmond and, at the time, Azure was guaranteed to have a global outage if a major earthquake incapacitated Redmond. Of course the plan was that there was no real plan and people expected that Azure would have a very extended global outage and an org that was on its way to becoming a $1T business unit would have its value basically wiped out.

    (tags: cold-boot software tech it ops disaster-recovery azure dan-luu)

  • The scary truth about AI copyright is nobody knows what will happen next - The Verge

    Generative AI has had a very good year. Corporations like Microsoft, Adobe, and GitHub are integrating the tech into their products; startups are raising hundreds of millions to compete with them; and the software even has cultural clout, with text-to-image AI models spawning countless memes. But listen in on any industry discussion about generative AI, and you’ll hear, in the background, a question whispered by advocates and critics alike in increasingly concerned tones: is any of this actually legal?

    (tags: ai copyright ml law ip)

Links for 2022-11-15

  • Cloud Jewels

    Etsy: "Estimating kWh in the Cloud":

    We thought about how we might be able to estimate our energy consumption in Google Cloud using the data we do have: Google provides us with usage data that shows us how many virtual CPU (Central Processing Unit) seconds we used, how much memory we requested for our servers, how many terabytes of data we have stored for how long, and how much networking traffic we were responsible for. Our supposition was that if we could come up with general estimates for how many watt-hours (Wh) compute, storage and networking draw in a cloud environment, particularly based on public information, then we could apply those coefficients to our usage data to get at least a rough estimate of our cloud computing energy impact. We are calling this set of estimated conversion factors Cloud Jewels. Other cloud computing consumers can look at this and see how it might work with their own energy usage across providers and usage data. The goal is to help cloud users across the industry to help refine our estimates, and ultimately help us encourage cloud providers to empower their customers with more accurate cloud energy consumption data.
    This is a good interim step, but it's disappointing how inaccurate the CO2 data exposed by cloud providers is. IMO this needs to be fixed

    (tags: climate co2 google cloud aws etsy estimation cloud-jewels sustainability)

Links for 2022-11-14

  • The Fediverse From Home

    Interesting -- I didn't realise it was possible to connect to the Mastodon fediverse with such a low-impact service --

    A single-user instance with about 100 followers/followees uses somewhere between 50 to 100MB of RAM. CPU usage is only intensive when handling media or processing lots of federation requests.

    (tags: fediverse mastodon self-hosting hosting ops gotosocial)

  • ‘Immunity debt’

    A new form of COVID-19 misinformation has cropped up in Canada:

    The term “immunity debt” is circulating widely online as an explanation for a significant surge in respiratory illness in Canada [... This] hypothesis suggests people’s immune systems are weaker now, due to a lack of exposure to viruses while observing COVID-19 public health measures over the last two-and-a-half years. But this notion [...] is simply not true, says Colin Furness, an infection control epidemiologist and assistant professor in the faculty of information at the University of Toronto. “That is, in my estimation, and any immunologist will tell you this, nonsense,” he said. Dr. Samira Jeimy, an allergist and clinical immunologist at St Joseph’s Health Care London, agrees, saying the idea that one’s immune system can be weakened due to lack of exposure to illness “shows a basic lack of understanding of how the immune system works.” “There’s almost like an old wives tale, that you need to get sick to develop a healthy immune system. That’s actually not true.”

    (tags: immunity immunology covid-19 rsv viruses health medicine immunity-debt misinformation)

Links for 2022-10-25

Links for 2022-10-21

  • RIAA Flags ‘Artificial Intelligence’ Music Mixer as Emerging Copyright Threat

    They would, naturally....

    “There are online services that, purportedly using artificial intelligence (AI), extract, or rather, copy, the vocals, instrumentals, or some portion of the instrumentals from a sound recording, and/or generate, master or remix a recording to be very similar to or almost as good as reference tracks by selected, well known sound recording artists [...] To the extent these services, or their partners, are training their AI models using our members’ music, that use is unauthorized and infringes our members’ rights by making unauthorized copies of our members works. In any event, the files these services disseminate are either unauthorized copies or unauthorized derivative works of our members’ music"

    (tags: ai music riaa ml copyright)

Links for 2022-10-20

  • Liz Fong-Jones talks about Google's history of loadbalancers

    "Okay, the time has come, it's been an entire decade, let's talk about loadbalancing techniques and how they evolved at Google in response to various practical failure modes, from 2008 to 2012." This thread is great. A solid history of Google's use of various load balancing techniques, ranging from N+1 service duplication with implicit failover rules, modern-service-mesh-style proxying, client-side builtin load balancing libs, followed by local sidecars which downloaded routing assignment configs periodically and operated mainly offline.

    (tags: load-balancing history google ops liz-fong-jones service-meshes sidecars proxies)

Links for 2022-10-19

  • Latest Long Covid estimates

    tl;dr: 6.2% average rate, more women than men, 15% continued to suffer after 12 months.

    A total of 1.2 million individuals who had symptomatic SARS-CoV-2 infection were included (mean age, 4-66 years; males, 26%-88%). In the modeled estimates, 6.2% (95% uncertainty interval [UI], 2.4%-13.3%) of individuals who had symptomatic SARS-CoV-2 infection experienced at least 1 of the 3 Long COVID symptom clusters in 2020 and 2021, including 3.2% (95% UI, 0.6%-10.0%) for persistent fatigue with bodily pain or mood swings, 3.7% (95% UI, 0.9%-9.6%) for ongoing respiratory problems, and 2.2% (95% UI, 0.3%-7.6%) for cognitive problems after adjusting for health status before COVID-19, comprising an estimated 51.0% (95% UI, 16.9%-92.4%), 60.4% (95% UI, 18.9%-89.1%), and 35.4% (95% UI, 9.4%-75.1%), respectively, of Long COVID cases. The Long COVID symptom clusters were more common in women aged 20 years or older (10.6% [95% UI, 4.3%-22.2%]) 3 months after symptomatic SARS-CoV-2 infection than in men aged 20 years or older (5.4% [95% UI, 2.2%-11.7%]). Both sexes younger than 20 years of age were estimated to be affected in 2.8% (95% UI, 0.9%-7.0%) of symptomatic SARS-CoV-2 infections. The estimated mean Long COVID symptom cluster duration was 9.0 months (95% UI, 7.0-12.0 months) among hospitalized individuals and 4.0 months (95% UI, 3.6-4.6 months) among nonhospitalized individuals. Among individuals with Long COVID symptoms 3 months after symptomatic SARS-CoV-2 infection, an estimated 15.1% (95% UI, 10.3%-21.1%) continued to experience symptoms at 12 months.

    (tags: long-covid statistics disease covid-19 papers jama disability)

Links for 2022-10-14

  • The hygiene hypothesis doesn't apply to viruses

    Fascinating interview with Dr. Marsha Wills-Karp, an expert on the environmental determinants of immune diseases:

    Almost no virus is protective against allergic disease or other immune diseases. In fact, infections with viruses mostly either contribute to the development of those diseases or worsen them. The opposite is true of bacteria.
    Pets are good, though:
    We've also noticed that people who live on farms have fewer of these diseases because they're exposed to -- for lack of a better term -- the fecal material of animals. And what we have found is that it's due to these commensal bacteria. That is one of the components that helps us keep a healthy immune system. Most of us will probably not adopt farm life. But we can have a pet, we can have a dog.

    (tags: pets viruses bacteria hygiene hygiene-hypothesis health immune-system allergies farms)