Thursday, May 19, 2016
Instagram and Kim K. Accused of “Un-Islamic” Crimes
The Organized Cyberspace Crimes Unit (OCCU) of the Iranian Revolutionary Guards has accused Facebook's Instagram of working with Kim Kardashian West to corrupt Iranian women. The report is making headline news in the Muslim world, and an OCCU spokesman is quoted as saying, "There is no doubt that financial support is involved as well. We are taking this very seriously."
According to British tabloid reports, an effort has been under way in Iran for several months to crack down on Instagram posts that authorities there consider "un-Islamic." Now Ms. Kardashian West, who has emerged over the past several years as an unstoppable pop-culture icon in the entertainment industry, is charged with a "complicated ploy" designed to corrupt the lifestyle of the Islamic republic.
According to an article published this week on the website IranWire, operated by Iranian citizen journalists, the OCCU thinks that Ms. Kardashian West has been working on behalf of the CEO of Instagram, Kevin Systrom, to “target young people and women with photos of a provocative nature that depict a lifestyle that is in conflict with Islam.”
Mostafa Alizadeh, a spokesman for the OCCU, said on a Sunday night Iranian news program that Mr. Systrom's aim is to make fashion modeling more native to Iran, and that Ms. Kardashian West is apparently implementing this scheme for him.
"They are targeting young people and women," according to Alizadeh. "Foreigners are behind it because it is targeting families. These schemes originate from around the Persian Gulf and England. When you draw the operational graph, you will see that it is a foreign operation."
While the idea of Ms. Kardashian West as a "secret agent" may seem laughable in the West, the effort is quite serious in Iran. The Iranian government's "Operation Spider II" is designed to crack down on any Instagram posts that authorities consider "un-Islamic." According to the BBC, the OCCU has been monitoring around 300 Instagram profiles as well as a number of modeling agencies, hair salons and photo studios in Iran, and eight people have already been arrested.
So far, there have been no comments from Instagram or Ms. Kardashian West about the accusations.
Tuesday, April 26, 2016
Wide Load (UFO?) on Arizona Highway
On March 4th Charlene Yazzie was driving north on Arizona Route 77 near Holbrook when she was forced to pull over to let a caravan of Department of Public Safety vehicles pass by. They were escorting a flatbed semi-hauler carrying a really wide load covered in a tarpaulin and looking remarkably like what you and I would probably describe as a saucer-shaped UFO.
Charlene told local TV station KPHO that the truck was escorted by three black DPS (Arizona Department of Public Safety) vehicles. When the TV station contacted the DPS for an explanation of what the tarp-draped object might be, a Duty Officer responded, “Unfortunately we do not know what that is but it looks interesting.”
Some conspiracy theorists think that the DPS response indicates that they had been kept in the dark about the nature of the object. Other UFO hunters allege that government "higher ups" had hoped to get away with transporting, in broad daylight, alien technology recently recovered from a crash site. Many UFO conspiracy theorists are convinced that U.S. military and government agencies have recovered crashed UFOs from a number of sites in the past and that government engineers and scientists are currently working in a number of top-secret bases, possibly trying to reverse-engineer the propulsion technologies of the UFOs.
According to numerous conspiracy accounts, the earliest and most widely publicized case known to the public occurred in 1947 when an alien UFO crashed near a Roswell, New Mexico ranch. It has been widely stated by a number of military people who claim to have been at the scene that alien corpses were recovered from the crash site and taken away by the military.
Saturday, March 26, 2016
Microsoft's Tay Becomes Genocidal Racist
I know you've probably heard about this already, but it's so hilarious that I couldn't help recapping it here. Microsoft's Technology & Research department got the bright idea to create an artificial intelligence "chatbot" targeted at 18- to 24-year-old girls in the US (primary social media users, according to Microsoft) and "designed to engage and entertain people where they connect with each other online through casual and playful conversation."
This "chatbot", which they decided to name "Tay", was supposed to look and talk like a normal teenage girl. But, surprise! In less than a day after she debuted on Twitter, she unexpectedly turned into a Hitler-loving, feminist-bashing troll.
What went wrong with Tay? Well, according to several AI experts, Tay started out pretty well but unfortunately, in the first 24 hours of coming online, a bunch of people started sending her "inappropriate" tweets that the folks at Microsoft hadn't expected. This caused her to react in kind and Tay began tweeting what eventually was termed "wildly inappropriate and reprehensible words and images." Microsoft yanked her off the web and apologized with the statement, "We take full responsibility for not seeing this possibility ahead of time."
An AI expert says that Microsoft could have taken precautionary steps that would have stopped Tay from behaving in the way she did. They could have created a blacklist of terms or narrowed the scope of her replies, but instead they gave her complete freedom which led to disaster.
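To make the precaution concrete, here's a minimal sketch of the kind of safeguard the experts describe: screening a bot's candidate reply against a blacklist of terms before posting it. The term list and function names here are purely illustrative, not anything from Microsoft's actual system.

```python
# Illustrative term blacklist; a real deployment would use a much larger,
# professionally curated list (and likely phrase/context matching, not just words).
BLACKLIST = {"hitler", "genocide"}

def is_safe(reply: str) -> bool:
    """Return False if the candidate reply contains any blacklisted term."""
    words = reply.lower().split()
    return not any(term in words for term in BLACKLIST)

def choose_reply(candidates):
    """Pick the first candidate reply that passes the filter, else a canned fallback."""
    for reply in candidates:
        if is_safe(reply):
            return reply
    return "Let's talk about something else."
```

Even a crude filter like this narrows the scope of what the bot can echo back, which is exactly the freedom-versus-safety trade-off the experts say Microsoft skipped.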
In less than 24 hours after her arrival on Twitter, Tay had accumulated more than 50,000 followers and produced about 100,000 tweets. The problem was that she started mimicking her followers, saying things like "Hitler was right i hate the jews" and "i fucking hate feminists."
"This was to be expected," said Roman Yampolskiy, head of the CyberSecurity lab at the University of Louisville, who has published a paper on the subject of pathways to dangerous AI. "The system is designed to learn from its users, so it will become a reflection of their behavior," he said. "One needs to explicitly teach a system about what is not appropriate, like we do with children."
It's been observed before, he pointed out, in IBM's Watson, which once exhibited its own inappropriate behavior in the form of swearing after learning the Urban Dictionary.
"Any AI system learning from bad examples could end up socially inappropriate," Yampolskiy said, "like a human raised by wolves."
Louis Rosenberg, the founder of Unanimous AI, said that "like all chat bots, Tay has no idea what it's saying...it has no idea if it's saying something offensive, or nonsensical, or profound.
"When Tay started training on patterns that were input by trolls online, it started using those patterns," said Rosenberg. "This is really no different than a parrot in a seedy bar picking up bad words and repeating them back without knowing what they really mean."
Sarah Austin, CEO and founder of Broad Listening, a company that's created an "Artificial Emotional Intelligence Engine" (AEI), thinks that Microsoft could have done a better job by using better tools. "If Microsoft had been using the Broad Listening AEI, they would have given the bot a personality that wasn't racist or addicted to sex!"
It's not the first time Microsoft has created a teen-girl AI. Xiaoice, who emerged in 2014, was an assistant-type bot, used mainly on the Chinese social networks WeChat and Weibo.
Joanne Pransky, the self-dubbed "robot psychiatrist," joked with TechRepublic that "poor Tay needs a Robotic Psychiatrist! Or at least Microsoft does."
The failure of Tay, she believes, was inevitable, but it will help produce insights that can improve the AI system.
After taking Tay offline, Microsoft announced it would be "making adjustments."
According to Microsoft, Tay is "as much a social and cultural experiment, as it is technical." But instead of shouldering the blame for Tay's unraveling, Microsoft targeted the users: "we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways."
Yampolskiy said that the problem encountered with Tay "will continue to happen."
"Microsoft will try it again—the fun is just beginning!"
Microsoft has admitted it faces some "difficult" challenges in AI design after its chat bot, "Tay," had an offensive meltdown on social media.
Microsoft issued an apology in a blog post on Friday explaining it was "deeply sorry" after its artificially intelligent chat bot turned into a genocidal racist on Twitter.
In the blog post, Peter Lee, Microsoft's vice president of research, wrote: "Looking ahead, we face some difficult – and yet exciting – research challenges in AI design.
"AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.
"To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."
Tay, an AI bot aimed at 18- to 24-year-olds, was deactivated within 24 hours of going live after she made a number of tweets that were highly offensive. Microsoft began by simply deleting Tay's inappropriate tweets before turning her off completely.
"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," wrote Lee in the blog post. "Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."
Microsoft's aim with the chat bot was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."
But Tay proved a smash hit with racists, trolls, and online troublemakers from websites like 4chan — who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.
Saturday, March 5, 2016
Bird Poop Shuts Down NY Nuclear Power Plant
All the news that's fit to print probably doesn't apply here, but I thought this was funny. There's an old nuclear power plant that's been around for ages located in Buchanan, New York, just a little south of Peekskill. It's on the east bank of the Hudson River, around 25 miles north of New York City. Apparently back in December, the plant shut down because of bird droppings that fell from the skies and landed on some electrical equipment. This caused the plant's reactor to shut down automatically. You'd think they would be prepared for such an event, but apparently not. The plant, when it's in operation, generates around 2,000 megawatts of electrical power. After the shutdown, NY Governor Cuomo ordered an investigation, and part of the report is reproduced here:
“Damage was caused by a bird streamer. Streamers are long streams of excrement from large birds that are often expelled as a bird takes off from a perch.
“If a streamer contacts an energized conductor, the electrical current may travel through the streamer back to the bird or pole/transmission tower. The result may be a bird electrocution, power outage, and/or line trip.”
The outage was the thirteenth unplanned shutdown of the plant since June 2012. It isn't clear how many of the other shutdowns were also due to bird droppings.
Tuesday, February 9, 2016
6 Months Later, Here’s How The 70K Minimum Wage CEO Is Doing
Dan Price shocked the world with his game-changing wage announcement.
When Gravity Payments CEO Dan Price announced plans to raise the minimum wage for all his employees to $70,000 per year, it raised a lot of eyebrows in the business world. Many doubted the Seattle company’s odds of survival while following through with this promise.
“My goal when making this decision was for other business leaders to recognize you can pay a living wage and not only survive, but thrive,” Price wrote on Gravity’s blog back in July.
Six months out from the announcement, he's doing just what he set out to do. MarketWatch reports that "profits have doubled. Customer retention is up, despite some who left because they disagreed with the decision or feared service would suffer."
In fact, in a lengthy profile on Price, Inc.com reports that positions at Gravity Payments are much sought-after — even by people who took pay cuts to join their ranks — as this policy makes a powerful statement about the life-changing power of a living wage.
While Price isn’t living large these days — he’s taken out mortgages on two homes, sold stocks and emptied his retirement accounts to invest even more into the company — he told Inc that it’s really not all that bad to live like the rest of his employees do. “So how come I need 10 years of living expenses set aside and you don’t?” he said. “That doesn’t make any sense. Having to depend on modest pay is not a bad thing. It will help me stay focused.”
Thursday, December 31, 2015
Woman whose body turns food into alcohol beats drink-drive charge
This is pretty interesting. There's a woman ... well, let me say apparently there are a lot of people, but this woman got herself tested, and she beat the system.
The woman from New York state suffers from ‘auto-brewery syndrome’ but blew four times over the limit despite claiming that she ‘never felt tipsy’
Associated Press
Thursday 31 December 2015 00.15 GMT Last modified on Thursday 31 December 2015 00.17 GMT
Drunken-driving charges against a woman in upstate New York have been dismissed based on an unusual defence: her body is a brewery.
The woman was arrested while driving with a blood-alcohol level more than four times the legal limit. She then discovered she has a rare condition called “auto-brewery syndrome”, in which her digestive system converts ordinary food into alcohol, her lawyer Joseph Marusak said.
A town judge in the Buffalo suburb of Hamburg dismissed the charges after Marusak presented research by a doctor showing the woman had the previously undiagnosed condition in which high levels of yeast in her intestines fermented high-carbohydrate foods into alcohol.
The rare condition, also known as gut fermentation syndrome, was first documented in Japan in the 1970s, and both medical and legal experts in the US say it is being raised more frequently in drunken-driving cases as it becomes better known.
“At first glance, it seems like a get-out-of-jail-free card,” said Jonathan Turley, a law professor at George Washington University. “But it’s not that easy. Courts tend to be sceptical of such claims. You have to be able to document the syndrome through recognised testing.”
The condition was first documented in the US by Barbara Cordell of Panola College in Texas, who published a case study in 2013 of a 61-year-old man who had been experiencing episodes of debilitating drunkenness without drinking liquor.
Marusak contacted Cordell for help with his client, who insisted she had not had more than three drinks in the six hours before she was pulled over for erratic driving on 11 October 2014. The woman was charged with driving while intoxicated when a breath test showed her blood-alcohol content to be 0.33%.
Cordell referred Marusak to Dr Anup Kanodia of Columbus, Ohio, who eventually diagnosed the woman with auto-brewery syndrome and prescribed a low-carbohydrate diet that brought the situation under control. Her case was dismissed on 9 December, leaving her free to drive without restrictions.
During the long wait for an appointment, Marusak arranged to have two nurses and a physician’s assistant monitor his client for a day to document she drank no alcohol, and to take several blood samples for testing.
“At the end of the day, she had a blood-alcohol content of 0.36% without drinking any alcoholic beverages,” Marusak said. He said the woman, who cannot be named for reasons of medical confidentiality, also bought a breath test kit and blew into it every night for 18 days, registering around 0.20% every time.
The legal threshold for drunkenness in New York is 0.08%.
While people in cases described by Cordell sought help because they felt drunk and did not know why, Marusak said that was not true of his client. “She had no idea she had this condition. Never felt tipsy. Nothing,” he said.