Facial recognition software is not ready for use by law enforcement | TechCrunch
This is a pretty amazing op-ed from the CEO of a facial recognition software development company:
Facial recognition technologies, used in the identification of suspects, negatively affect people of color. To deny this fact would be a lie. And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens — and a slippery slope to losing control of our identities altogether. There’s really no “nice” way to acknowledge these things. I’ve been pretty clear about the potential dangers associated with current racial biases in face recognition, and open in my opposition to the use of the technology in law enforcement. As the black chief executive of a software company developing facial recognition services, I have a personal connection to the technology, both culturally and socially. Having the privilege of a comprehensive understanding of how the software works gives me a unique perspective that has shaped my positions about its uses. As a result, I (and my company) have come to believe that the use of commercial facial recognition in law enforcement or in government surveillance of any kind is wrong — and that it opens the door for gross misconduct by the morally corrupt.
(tags: techcrunch facial-recognition computer-vision machine-learning racism algorithms america)
Yelp, The Red Hen, And How All Tech Platforms Are Now Pawns In The Culture War
Though the brigading of review sites and doxxing behavior isn’t exactly new, the speed and coordination is; one consequence of a never-ending information war is that everyone is already well versed in their specific roles. And across the internet, it appears that technology platforms, both big and small, must grapple with the reality that they are now powerful instruments in an increasingly toxic political and cultural battle. After years attempting to dodge notions of bias at all costs, Silicon Valley’s tech platforms are up against a painful reality: They need to expect and prepare for the armies of the culture war and all the uncomfortable policing that inevitably follows. Policing and intervening isn’t just politically tricky for the platforms, it’s also a tacit admission that Big Tech’s utopian ideologies are deeply flawed in practice. Connecting everyone and everything in an instantly accessible way can have terrible consequences that the tech industry still doesn’t seem to be on top of. Silicon Valley frequently demos a future of seamless integration. It’s a future where cross-referencing your calendar with Yelp, Waze, and Uber creates a service that’s greater than the sum of its parts. It’s an appealing vision, but it is increasingly co-opted by its darker counterpart, in which major technology platforms are daisy-chained together to manipulate, abuse, and harass.
(tags: culture-war technology silicon-valley yelp reviews red-hen dystopia spam doxxing brigading politics)
AWS Developer Forums: m5.xlarge in us-east-1 has intermittent DNS resolution failures
likewise for C5 instance types; reportedly still an issue
ICE’s Risk Classification Assessment turned into a digital rubber stamp
If this report is correct, this “statistics-based” risk classification tool is just a cruel joke:
To conform to Trump’s policies, Reuters has learned, ICE modified a tool officers have been using since 2013 when deciding whether an immigrant should be detained or released on bond. The computer-based Risk Classification Assessment uses statistics to determine an immigrant’s flight risk and danger to society. Previously, the tool automatically recommended either “detain” or “release.” Last year, ICE spokesman Bourke said, the agency removed the “release” recommendation.
(tags: immigration statistics machine-learning rubber-stamping fake-algorithms whitewashing ice us-politics)
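The rubber-stamp point is easy to see in code. Here's a minimal, purely illustrative sketch of the change the Reuters report describes: once the "release" branch is removed, the statistical score no longer influences the outcome at all. The function names and threshold below are my own assumptions, not details from the report.

```python
def recommend_original(risk_score: float, threshold: float = 0.5) -> str:
    """Original behavior (illustrative): the score drives the outcome."""
    return "detain" if risk_score >= threshold else "release"


def recommend_modified(risk_score: float) -> str:
    """Post-change behavior (illustrative): with the "release" branch
    removed, every input maps to the same recommendation, so the
    underlying statistics are irrelevant."""
    return "detain"  # the risk score no longer affects anything
```

However sophisticated the model feeding `risk_score` may be, the second function is a constant: that's what makes it a rubber stamp rather than an assessment.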