Web Development sales and sales engineering professional combining that background with a Citizen Access and Identity Management solution to help public sector entities achieve their digital transformation goals.
Just saw a post about using AI to replace the security staff intently watching the cameras in retail spaces. That's fine, as long as we legislate that false positives cost the retailer an immediate, in-cash $100 payment to the affected shopper. Smart hackers will quickly figure out how to game the system and make a living on the $100 payouts. Retailers will quickly figure out that the SaaS subscription costs and $100 payouts exceed what they save in reduced loss. Too bad they signed a 3-year contract.

This post was not written by AI. But the paragraph below was written by Google Gemini, and it's not bad, but it's boring:

"The use of AI to detect shoplifting raises serious privacy concerns. While retailers may see it as a solution to shrink, the potential for misuse and discrimination is significant. AI systems are trained on data that can reflect existing biases, leading to profiling and wrongful accusations. Additionally, the constant surveillance of shoppers erodes trust and creates a chilling effect on the shopping experience. We need to have a conversation about the ethical implications of this technology and ensure that it's used responsibly and transparently."
I'll note that the AI paragraph above sounds a lot like many of the posts in my LI feed.