AI Daily: Adobe detects photoshopped images, predicting school shootings, can AI diagnose skin cancer? 

In today’s AI Daily, we feature two use cases of advanced image analysis: Adobe’s tool to identify photoshopped images and protect original authenticity, and new research on skin cancer detection using convolutional neural networks. We also picked a story discussing the potential use of AI to predict violent behavior and school shootings.

Adobe is using AI to catch Photoshopped images (engadget.com)

  • Using AI to find fake images is a way for Adobe to help “increase trust and authenticity in digital media,” the company says.
  • The team trained the AI to figure out the type of manipulation used on an image and to flag the area of a photo that someone changed.
  • The research team says it might harness the AI to examine other types of artifacts, like those caused by compression when a file is saved repeatedly.

Man against machine. Can AI outperform dermatologists when diagnosing skin cancer? (medium.com)

  • A study conducted by an international team of researchers pitted experienced dermatologists against a machine learning system, known as a deep learning convolutional neural network, or CNN, to find out who is better at detecting malignant melanomas.
  • To train it, the researchers showed the CNN more than 100,000 images of malignant and benign skin cancers and moles and indicated the diagnosis for each image.
  • The doctors were shown 100 images of skin lesions and asked to make a diagnosis, using their judgment about whether it was a malignant melanoma or benign mole.
  • The CNN missed fewer melanomas, meaning it had a higher sensitivity than the dermatologists, and it misdiagnosed fewer benign moles as malignant melanoma, which means it had a higher specificity; this would result in less unnecessary surgery. (A worked example of these two metrics follows this list.)
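
The sensitivity/specificity tradeoff the study reports can be made concrete with a short calculation. The counts below are hypothetical, chosen only to illustrate how the two metrics are computed; they are not the study’s results.

```python
# Hypothetical confusion matrix for a melanoma classifier evaluated on 100 lesions.
# These counts are illustrative only, not the figures reported in the study.
true_positives = 27   # melanomas correctly flagged as malignant
false_negatives = 3   # melanomas missed (called benign)
true_negatives = 60   # benign moles correctly called benign
false_positives = 10  # benign moles wrongly flagged as malignant

# Sensitivity: share of actual melanomas the model catches (higher = fewer missed cancers).
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: share of benign moles correctly left alone (higher = less unnecessary surgery).
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"specificity = {specificity:.2f}")  # 0.86
```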

Can Artificial Intelligence Help Stop School Shootings? (smithsonianmag.com)

  • So Nallapati, who’s been studying artificial intelligence in high school, reached out to other young women she knows through a program called Aspirations in Computing that’s run by the National Center for Women & Information Technology.
  • Specifically, the scientists found that AI was as accurate as a team of child and adolescent psychiatrists when it came to assessing the risk of violent behavior, based on interviews with 119 kids between the ages of 12 and 18.
  • More than 91 percent of the time, the algorithm, using only the transcripts, aligned with the more extensive assessments of a team of child and adolescent psychiatrists, who also had access to information from parents and schools.
  • “We are now seeing a trend of AI being applied to very sensitive domains at alarming speeds, and people making these algorithms don’t necessarily understand all the social, and even political, aspects of the data they’re using,” says Rashida Richardson, director of policy research at the AI Now Institute, a program at New York University that studies the social implications of artificial intelligence.
  • “Trying to understand very complicated issues that have a myriad of input and applying a tech solution that reflects a sliver of it is a problem because it can either reiterate the same problems we see in society or create a solution for a problem that’s not there.”

Exactly What Is The Difference Between A Copywriter And Content Writer?

Ever scratched your head over the distinction between a copywriter and a content writer? How is one truly different from the other?

At Frase we’re not ones to sit on our hands. We scoured the web using our cutting-edge research technology to identify the online sources most closely answering our question.

Having found the 12 most relevant sources offered up by Frase’s Research Assistant, we reviewed the custom summaries generated by the system and whittled them down to the five most relevant articles from such publishers as Skyword and Copyblogger.

The insights we unearthed are illuminating.

One critical distinction between a copywriter and a content writer that our sources agree on involves the intention of the writing. Copywriting is seen largely as marketing copy, intended to persuade. Slogans, taglines, “About us” copy, even headlines designed to entice clicks, are examples of the copywriter’s craft.

By contrast, content writing is viewed as informing and educating. Blog posts, white papers, and longer-form product descriptions all fall into this category.

But it isn’t always cut and dried. Our expert sources acknowledge that good copywriting often includes informational content, and that informational content may incorporate elements of copy. A white paper needs a catchy headline, after all, or nobody is going to read it. Content without copywriting is a waste of good content, as Copyblogger says.

But don’t take our word for it. Here are Frase’s AI-generated summaries of our five sources, with links out to each of the original articles.

What’s the Difference Between Content Marketing and Copywriting? – Copyblogger (copyblogger.com)

  • If you’re writing great articles that people would love to read, but you’re not getting the traffic you want, the problem may be ineffective copywriting: Your headlines might be too dull.
  • Just like a product has to have a benefit to the buyer, your content has to be inherently rewarding to readers or they won’t come back to your website.
  • It’s tricky to show readers your blog is a cool place to hang out when you don’t have lots of readers yet, but we have a few tips for you.

Copy vs. Content Writing: What’s the Difference? (skyword.com)

Topics: landing page, content marketing, online media, social media, direct mail, blog post, end user, copywriting

  • Because there’s general confusion on the differences between copy and content, you can decide how to define each category on your own terms, and expect that they may be used interchangeably depending on who you work with.
  • I consider most of the work I do to be content because it’s longer form, such as this blog post, and I’m almost always giving advice.
  • When you start working on a project, it’s more important to ask yourself why the project requires the written word, rather than whether it’s copy or content.
  • The recruiter I met with may always use different words to describe the services I offer, and that’s OK. As a content creator, it’s most important to figure out what you’re good at, and show samples to anyone who’s curious.

Content Writer vs Copywriter: What’s the Difference? (koozai.com)

Topics: search engine, blog posts, display advertising, marketing material, written word, meta, skill sets, comment section

  • Each acts as a cornerstone for the other; think of them as two sides of the same coin, because although there are similarities between the two skill sets, there are also some clear differences.
  • When a Content Writer creates a piece of content they are most likely considering the use of keywords, meta, and how shares and links to the piece will amplify the content.

Content Writer vs. Copywriter: What’s the Difference? (marketingprofs.com)

  • I may not be in the market for a car or a pizza or new type of toothbrush, but effective copy can persuade me to consider one.
  • If the decision is to provide a core dump that attempts to be all things to all people or write copy that tries to sell something to readers or viewers or listeners who don’t want to buy, it’s time for client management to undergo remedial training.

Copywriting vs. Content Writing: What’s the Difference Between the Two? – Quietly Blog (blog.quiet.ly)

  • The ultimate objective of copywriting is to sell an idea, whereas content writing aims to create valuable content that helps the audience understand your brand and generates interest.
  • Bottom line: A copywriter is a professional who writes marketing copy; a content writer can be anyone producing content.
  • What you really need to know is that brands of all kinds need copywriting and content writing to stay fresh, so there are plenty of opportunities for writers out there to try their hands at both.

AI Daily: detecting nuclear pulses, Salesforce’s NLP breakthrough, is deep learning hitting a wall?

In today’s AI Daily, we feature how a neural network can detect nuclear events with more accuracy than human experts. The program sorted more than 99.9 percent of the pulses correctly. In other news, Salesforce just unveiled a major advancement in NLP, creating a very efficient pipeline that can perform 10 tasks at once, with implications for question-answering techniques, among others. Finally, we picked a story that debates whether we’ve hit a wall with deep learning research.

Enhanced detection of nuclear events, thanks to deep learning (phys.org)

  • A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than—today’s best automated methods or even human experts.
  • The progress in tackling some of the most complex problems of the environment, the cosmos and national security comes from scientists at the Department of Energy’s Pacific Northwest National Laboratory, who presented their work at the 11th MARC conference—Methods and Applications of Radioanalytical Chemistry—in April in Hawaii.
  • Results are even more impressive when the data is noisy and includes an avalanche of spurious signals: In an analysis involving 50,000 pulses, the neural network agreed 100 percent of the time with the human expert, besting the best conventional computerized techniques, which agreed with the expert 99.8 percent of the time.

Salesforce unveils a big advance in natural language processing (siliconangle.com)

  • Just a few years ago, asking your phone a question to find information on the internet was, well, pretty much out of the question because computers weren’t all that great at understanding phrases outside a narrow few.
  • The paper is essentially a challenge, called the Natural Language Decathlon, or decaNLP for short, paired with a “multitask question answering network” (MQAN) model that jointly learns all 10 tasks at once (the task-as-question-answering framing is sketched after this list).
  • In other words, researchers and developers essentially can use one tool instead of having to use one for each of those tasks, which have required hypercustomized models that can’t be used for any other task.
  • MQAN allows for what is known as “zero-shot” learning, which means the model can handle tasks it hasn’t seen before or been trained to do.
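
The unifying idea behind decaNLP is to cast every task as question answering over a context, so one model can learn them all. The snippet below illustrates that framing with made-up examples; the field names and examples are hypothetical, not Salesforce’s actual dataset schema or code.

```python
# Illustrative only: three different NLP tasks expressed as (question, context, answer)
# triples, the kind of framing that lets a single QA model learn many tasks jointly.
examples = [
    {
        "task": "sentiment analysis",
        "question": "Is this review positive or negative?",
        "context": "The film was a delight from start to finish.",
        "answer": "positive",
    },
    {
        "task": "summarization",
        "question": "What is the summary?",
        "context": "Salesforce researchers released a benchmark spanning ten NLP tasks ...",
        "answer": "Salesforce released a ten-task NLP benchmark.",
    },
    {
        "task": "machine translation",
        "question": "What is the translation from English to German?",
        "context": "Good morning.",
        "answer": "Guten Morgen.",
    },
]

for ex in examples:
    print(f"[{ex['task']}] Q: {ex['question']} | A: {ex['answer']}")
```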

Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So (nytimes.com)

  • In a widely read article published early this year on arXiv.org, a site for scientific papers, Gary Marcus, a professor at New York University, posed the question: “Is deep learning approaching a wall?” He wrote, “As is so often the case, the patterns extracted by deep learning are more superficial than they initially appear.”
  • Amid the debate, some research groups, start-ups and computer scientists are showing more interest in approaches to artificial intelligence that address some of deep learning’s weaknesses.
  • “We’re not anti-deep learning,” said Yejin Choi, a researcher at the Allen Institute and a computer scientist at the University of Washington.

Producing Better Content Briefs For Writers And For Search

Frase shows SEOs not only what to write but how, says AI content marketer

Anyone who has managed a team of writers knows the importance of providing clear directions before the writing begins. Too little direction and the finished work can end up very different from what you had intended.

The challenge nowadays is that a list of keywords no longer cuts it as the basis for a creative brief. What may have worked in SEO five years ago doesn’t fly today. Instead, creative briefs must go deep on background research.

Consider how much Google’s understanding of semantic search has evolved in recent years

Google is no longer looking for specific keywords in your content, but is seeking deeper topic authority. Your creative brief needs to provide a wider understanding of the topic so the writer can incorporate the big picture into the creative process.

This involves doing background research as part of composing a brief. And doing research well takes time. At least that is the conventional wisdom.

Enter Fernando Nikolic, a digital marketer using AI to produce research-driven content briefs

Fernando Nikolic manages a global team of writers from his base in the German capital Berlin. He deeply understands the need for solid topic research and for translating what he finds into robust article outlines for his team. In fact, Fernando frequently blogs about the importance of incorporating AI-driven tools in content marketing. And Frase has proven invaluable to Fernando in running his editorial operation.

Fernando has his content process down to a fine art

It wasn’t always the case. With previous research tools, Fernando ended up with a list of topics but no clear direction about how to craft the article. “These tools for content marketers only give you the keywords,” he observes.

Automatic summarization and topic modeling to speed up the research process

With Frase, it’s a different proposition. Using SEO solution CanIRank, Fernando begins by researching key terms across subject areas such as AI and cryptocurrency. Turning next to a dedicated Slack channel (the aptly named #KeywordFriday), he gives his international team of writers a chance to select the topics that most appeal to them.

From there, Fernando uses Frase to generate a research-driven content brief for each topic. At its core, Frase’s AI-powered word processor provides two main components:

The Frase Research Assistant (the right-side panel of the interface) summarizes and recommends topics by analyzing the most relevant search results for a given theme. For each source, Frase generates a structured summary.

In addition to summaries, Frase allows Fernando to explore the most salient topics across top search results. For the theme “artificial intelligence for content marketing,” for example, Frase surfaces a set of related topics it encourages you to research.

Fernando uses both Frase’s summarization and topic modeling to compose a final research document, which is ultimately shared with his writers. “This is where Frase revolutionizes the whole game,” he says of his finely-tuned process.

The value to Fernando and his team is tangible. Frase’s ability to summarize articles in a user-friendly and intuitive way that immediately extracts the content’s core elements has proven key to fast, high-value editorial research and content creation.

Other content optimization tools fall short on value and usability, Fernando notes. “Ever since I discovered Frase I stopped looking for alternatives. Frase does everything I need.”

Better creative briefs for greater productivity, stronger commercial outcomes

The more deeply researched your creative brief, the better the final content is likely to be. Writers appreciate the additional context and the stronger editorial direction. Content managers value the extra control it provides around quality.

And that’s not all. Not only do search engines reward deep, differentiated content that displays topic authority; the process efficiencies generated by Frase’s AI summarization technology will also expedite areas of your editorial workflow that previously dragged.

Deeper, more relevant content. Your bottom line will thank you for it.

 

61% of Americans don’t believe AI can fix the Fake News problem, yet.

At Frase we are constantly thinking about the future of content creation, and how AI will play a role in making us a more informed society. This post discusses what the general public thinks about AI as a possible tool to combat fake news. It also explores possible alternatives.

In partnership with Lucid, the leading Human Answers Platform, we surveyed a demographically balanced audience in the US with the question: “Can Artificial Intelligence (AI) fix the Fake News problem?”

Now, let’s give more background to the Fake News controversy. Around the 2016 US presidential election, political scientists from Princeton University reported that 1 in 4 Americans visited a fake news site. Considering that 62 percent of Americans get their news from social media, platforms like Facebook or Twitter should play a prominent role in combating the Fake News epidemic.

When Mark Zuckerberg told Congress that Facebook would use artificial intelligence to detect fake news posted on the social media site, he wasn’t particularly specific about what that meant or how feasible that would be. We know Facebook is currently working on a number of initiatives to mitigate the fake news problem, including an increased focus on content from friends and family as it relates to Facebook’s News Feed.

Most people don’t believe AI is the solution to Fake News

Based on our survey, we can conclude that the majority of Americans don’t believe AI is ready to tackle the problem yet. That said, it is worth looking at some innovations occurring in the space. In fact, I could argue that the proliferation of fake articles on the Internet works in favor of AI’s chances of learning how to detect fake stories.

After doing some research, I discovered that fake news generally falls into these categories:

  • entirely false or fake information;
  • discussion of real events with inaccurate interpretations;
  • pseudoscientific articles that pretend to have a research foundation;
  • satirical content;
  • articles that include a mix of unreliable opinions from online forums or social media platforms, most notably Twitter.

My initial reaction was that an AI system could be trained to classify articles into these categories. But of course, the system would not be perfect and would have a high degree of bias based on who labels the training dataset. In addition, a given website might have a combination of real and fake information, which makes the data collection more time-consuming.
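
To make the classification idea concrete, here is a minimal sketch of how such a system could be prototyped, assuming a labeled corpus already exists. It uses a plain TF-IDF plus logistic regression pipeline purely for illustration; the toy training examples below are placeholders, and a real system would need thousands of labeled articles spanning all five categories listed above.

```python
# Minimal sketch of an article-category classifier, assuming a labeled corpus exists.
# The texts and labels below are placeholders; real labels would also inherit the
# biases of whoever annotated the training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In a full prototype the labels would cover all five categories discussed above
# (entirely fake, misinterpreted real events, pseudoscience, satire, opinion mixes).
train_texts = [
    "Scientists say this one weird mineral cures every known disease.",
    "The senator's remarks were real, but the article twists their meaning.",
]
train_labels = ["pseudoscience", "real_event_misinterpreted"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

print(model.predict(["Miracle rock reverses aging, researchers allegedly confirm."]))
```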

A company called Fakebox found an interesting solution. Their answer isn’t detecting fake news, but detecting real news! Real news is factual and has little to no interpretation. There are also plenty of reputable sources to build a dataset from.

In the words of Fakebox’s co-founder:

We trained a machine learning model that analyzes the way an article is written, and tells you if it’s similar to an article written with little to no biased words, strong adjectives, opinion, or colorful language. It can have a hard time if an article is too short, or if it’s primarily comprised of other people’s quotes (or Tweets). It is not the end-all solution to fake news. But hopefully it will help spot articles that need to be taken with a grain of salt.

Another interesting tool is FakeRank, which works like Google’s PageRank but for fake news detection: instead of links between web pages, the network consists of facts and supporting evidence. It leverages knowledge from the web with deep learning and natural language processing techniques to understand the meaning of a news story and verify that it is supported by facts.
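
FakeRank’s internals aren’t described here, but the PageRank analogy can be illustrated on a toy graph: treat claims and pieces of evidence as nodes, draw an edge from each piece of evidence to the claim (or other evidence) it supports, and run standard PageRank so well-supported claims accumulate score. The graph below is entirely invented and uses networkx’s stock PageRank, not FakeRank itself.

```python
# Toy illustration of the PageRank-over-facts idea; not FakeRank's actual algorithm.
import networkx as nx

G = nx.DiGraph()
# Edges point from a piece of evidence to the claim (or evidence) it supports,
# so claims backed by more, and better-connected, evidence score higher.
G.add_edges_from([
    ("blog post", "claim: vaccine causes X"),
    ("weather agency report", "claim: city flooded on Tuesday"),
    ("local newspaper story", "claim: city flooded on Tuesday"),
    ("weather agency report", "local newspaper story"),  # the story cites the report
])

scores = nx.pagerank(G)
for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {node}")
```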

Unfortunately, AI can also be used to create even more fake content

Here are a few quotes that illustrate the dark side of AI in content creation:

  • “Machine-learning experts have built a neural network that can manipulate videos to create fake footage – in which people appear to say something they never actually said.” (theregister.co.uk)
  • “So incredible, that of the 1,600 reviews posted on the book’s Amazon page in just a few hours, the company soon deleted 900 it suspected of being bogus: written by people who said they loved or hated the book, but had neither purchased nor likely even read it.” (scientificamerican.com)
  • “In a paper published this month, the researchers explained their methodology: Using a neural network trained on 17 hours of footage of the former US president’s weekly addresses, they were able to generate mouth shapes from arbitrary audio clips of Obama’s voice.” (qz.com)
  • “Artificial intelligence can use still images of people and turn them into FAKE video – and even put words in their mouths. The system takes an image of a person as well as an audio clip to create a video. It identifies facial features in photos using algorithms that recognize the face. As audio plays, it manipulates the mouth so it looks like the person is speaking. With improvement, the researchers say the AI could make fake videos seem real. The system could eventually render video evidence unreliable in court cases.” (dailymail.co.uk)

So…could blockchain come to the rescue?

Besides AI, there is another technology that could play a major role in solving the fake news problem: blockchain. I found two projects that propose very similar approaches, as per their websites:

  • Eventum makes it easy for people to get paid for reporting on real-time events and information around them, while developers can get any data they want in a cheap, fast and secure data feed. It uses a ‘wisdom of the crowd’ principle and ‘blockchain-as-a-court-system’ on the Ethereum network to solve problems, including fake news, eSports data extraction and real-time feedback to AI algorithms.
  • PUBLIQ, a nonprofit foundation, launched a decentralized content platform that offers merit-based, real-time, and instant remuneration to authors from all over the world, in order to combat fake news and biased reporting. The PUBLIQ Foundation builds a trust-based ecosystem that is operated by authors, journalists, bloggers and advertisers around the world and encourages them to share their written perspectives in a safe and encrypted way.

Takeaways

The survey results are not surprising considering that even Facebook can’t articulate a clear plan as to how AI will solve the fake news problem. It seems that instead of attempting to find out whether each story is factual, we might be better off examining its source and distributors. This is where blockchain might be able to help. In addition, AI could prove valuable in recognizing whether the content of a story is very similar to that of other stories that have proven to be fake.

AI Daily – predicting death, tracking lions with computer vision, what are adversarial attacks?

In today’s AI Daily, we discuss how researchers at Google Brain are developing tools to predict when a patient will die. We also picked a story on how computer vision is helping zoologists track animals by analyzing millions of wildlife images. Finally, we feature a story on cybersecurity and how “adversarial attacks” use small modifications to input data to make machine learning models behave in ways they are not supposed to.

Google Is Training Machines to Predict When a Patient Will Die (bloombergquint.com)

  • The harrowing account of the unidentified woman’s death was published by Google in May in research highlighting the health-care potential of neural networks, a form of artificial intelligence software that’s particularly good at using data to automatically learn and improve.
  • Google had created a tool that could forecast a host of patient outcomes, including how long people may stay in hospitals, their odds of re-admission and chances they will soon die.
  • As much as 80 percent of the time spent on today’s predictive models goes to the “scut work” of making the data presentable, said Nigam Shah, an associate professor at Stanford University, who co-authored Google’s research paper, published in the journal Nature.
  • Dean’s health research unit — sometimes referred to as Medical Brain — is working on a slew of AI tools that can predict symptoms and disease with a level of accuracy that is being met with hope as well as alarm.

Artificial intelligence used to identify and count wild animals (digitaljournal.com)

  • The artificial intelligence was able to screen 3.2 million images in a matter of weeks and determine which of 48 different species of animal was present in a given image, as well as the number of animals and the activity being performed, such as eating, sleeping, or moving.
  • Discussing the development, lead researcher Professor Jeff Clune said: “This technology lets us accurately, unobtrusively and inexpensively collect wildlife data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology and animal behavior into ‘big data’ sciences.”

What Are Adversarial Attacks? How Hackers Force AI to Make Mistakes (dailydot.com)

  • A few months ago, a group of MIT students published a proof-of-concept that showed how, with a couple of tweaks, they were able to trip up Google’s AI while maintaining the natural appearance of a toy turtle to the human eye.
  • Another study by researchers at the University of Michigan, the University of Washington, and the University of California, Berkeley showed that by making small additions to stop signs, they could render them unrecognizable to the machine-vision algorithms that enable self-driving cars to understand their surroundings.
  • “Raising awareness in the tech community that almost all systems need to be designed with security in mind, and that security should be taken into consideration from step one, is going to be as much a challenge as an opportunity in the next years,” he says.
  • Another method proposed by Google’s researchers is the use of generative adversarial networks (GANs), an AI technique that enables AI systems to create their own high-quality data.

AI Daily: robots run Chinese factory, meet Amazon’s smart camera, can AI imagine places?

In today’s AI Daily, we look at how a Chinese company with only 4 employees handles 200,000 orders per day thanks to robots. We also explore a few stories about computer vision featuring innovations from Amazon and DeepMind.

Chinese e-commerce company JD.com is running a robot-driven warehouse (dailymail.co.uk)

  • Chinese e-commerce firm JD.com has built a nearly autonomous warehouse with almost zero human employees. The logistics facility is near Shanghai and has a sophisticated network of automated machinery that scans packages, delivers them to trucks, and more. The facility’s only four employees are on hand to monitor the automated robots, and JD.com has already launched a network of autonomous delivery drones.
  • Chinese e-commerce giant JD.com has constructed a fulfillment center that handles as many as 200,000 orders each day – and it only has four employees.

Amazon’s DeepLens camera is available in the U.S. for $249 (digitaltrends.com)

  • From Google to Snapchat, artificial intelligence is expanding the camera’s prowess and Amazon wants to give developers the chance to learn about deep learning and computer vision.
  • Using AWS DeepLens software and a computer, users can choose from project templates for a more guided learning experience or choose to design their own software from scratch.
  • The templates and sample projects walk developers through how each project works so they can build hands-on experience integrating deep learning into their own projects.

Google’s DeepMind develops AI that can render 3D objects from 2D pictures (venturebeat.com)

  • “Much like infants and animals, the GQN learns by trying to make sense of its observations of the world around it,” DeepMind researchers wrote in a blog post.
  • “In doing so, the GQN learns about plausible scenes and their geometrical properties, without any human labeling of the contents of scenes.”
  • To train the system, DeepMind researchers fed GQN images of scenes from different angles, which it used to teach itself about the textures, colors, and lighting of objects independently of one another and the spatial relationships between them.

This creepy AI can predict your next move minutes in advance (dailymail.co.uk)

  • Researchers have taught an AI to recognise patterns in people’s actions, allowing it to accurately predict the next move in a sequence minutes in advance.
  • Researchers told the AI what was happening in the video for the first 20 per cent of the clip – and then asked the algorithm to predict the next action before it took place.
  • Gall and his colleagues want the study, which will be presented at the Conference on Computer Vision and Pattern Recognition in Salt Lake City on June 19, to be understood as a first step into the new field of activity prediction.
  • ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.

AI Daily – Google says no to Pentagon, Samsung launches AI fund, the future of retail

In today’s AI Daily, we look at Google’s decision to step away from a contract with the Pentagon after 4,000 employees signed a petition to end the project. We also feature Samsung’s new AI-focused fund and Microsoft’s new retail technology, which would eliminate the need for cashiers in retail stores.

Google’s retreat from AI contract is unlikely to cool the Pentagon’s love for Silicon Valley (latimes.com)

  • Google’s decision reportedly came after almost 4,000 employees (Alphabet, Google’s parent company, has a total workforce of 85,000) signed a letter asking chief executive Sundar Pichai to end the contract for Project Maven and stop all work in “the business of war.” At least a dozen employees resigned over the issue, according to Bloomberg.
  • “If you ask the fundamental question are the linkages between the valley and the Pentagon deepening all the time, the answer is a clear yes,” said Paul Bracken, professor of management at Yale University who has studied the role of technology in defense and has served as a consultant to the Defense Department.
  • “Many Googlers think that getting the deployment of artificial intelligence right is extremely important and should begin with getting it right,” he said, and that a military application should be avoided until the technology was very well understood.
  • “It’s a core issue for both industry and the government to understand and figure out ways to ensure that these artificial intelligence applications that are being used for national security are reliable and can support accountability,” Hunter said.

Samsung launches AI-focused fund (pehub.com)

  • MOUNTAIN VIEW, Calif.–Samsung NEXT, the group helping to transform Samsung by working with entrepreneurs and innovators to build, grow and scale software and services, today announced Samsung NEXT Q Fund to help enable the full potential of AI’s future.
  • With Q Fund, Samsung NEXT Ventures will have the flexibility to invest in non-obvious, forward-thinking approaches to AI instead of the applied AI technologies we see in the market today.
  • “And we want to invest in the people and teams who will try new approaches to lay the groundwork for what AI will be.”

Why AI-powered, cashier-free stores are the unavoidable future of retail (techrepublic.com)

  • Microsoft is developing a system that tracks what items shoppers in a store place in their carts, eliminating the need for cashiers and putting the company in direct competition with Amazon in the retail space, Reuters reported Thursday.
  • The big takeaways for tech leaders: Microsoft is reportedly developing a system that tracks what items shoppers in a store place in their carts, eliminating the need for cashiers in stores.
  • This technology would put Microsoft in direct competition with Amazon Go stores, and shows how the future of retail is likely to be automated.

AI Daily – Google Research in Ghana, exploring alien trade deals, creating the perfect FIFA team

In today’s AI Daily, we look at how Google is opening an AI center in Ghana to tackle challenges in agriculture, healthcare and education. We also picked a story about a potential trade deal with aliens where AI is mentioned in the context of human ethics. In other news, we feature use cases of AI in recruiting and soccer, along with the announcement of NVIDIA opening a research center in Toronto, which continues to position itself as a major AI hub.

Google will open an AI center in Ghana later this year, its first in Africa (venturebeat.com)

  • “Events like Data Science Africa 2017 in Tanzania, the 2017 Deep Learning Indaba event in South Africa, and follow-on IndabaX events in 2018 in multiple countries have shown an exciting and continuing growth of the computer science research community in Africa.”
  • This is Google’s first center devoted solely to AI research in Africa, and as far as we can tell, the first of any tech giant (beating Apple, Facebook, and Microsoft to the punch).
  • “[W]e’re excited to combine our research interests in AI and machine learning and our experience in Africa to push the boundaries of AI while solving challenges in areas such as healthcare, agriculture, and education,” Cisse and Dean wrote.

Our DNA Might Be the Most Valuable Thing in a Trade Deal With Aliens (motherboard.vice.com)

  • Any alien civilization that makes contact with Earth is probably way more advanced than us, and wouldn’t be interested in our technological relics, he said.
  • After all, of the more than 3,700 planets we’ve discovered outside our solar system, only about 50 of those have even the remotest chance of housing life. The organic molecules on Mars that NASA discovered earlier this month could have been from a volcanic explosion or meteor rather than a googly-eyed Martian life form.
  • “They might, for example, make diseases targeted to us.” Daniel Ross, a PhD candidate from the University of Illinois at Urbana-Champaign who’s interested in how we might decode alien languages, said it’s not so much about finding definitive answers to questions yet (we don’t have the means to), but rather theorizing on potential scenarios.
  • His presentation at the conference was among those put on by Messaging Extraterrestrial Intelligence (METI) International and the Search for Extraterrestrial Intelligence (SETI) Institute—international groups of scientists trying to figure out how to write and send messages out into the universe, and how those messages could be interpreted—for better or worse.

Plum uses AI to hire people ‘that never would have been discovered through a traditional hiring process’ (venturebeat.com)

  • According to Plum, hiring managers make conclusions based on an applicant’s name in 0.2 seconds and decide the outcome of an interview in 10 seconds, which can lead to hires that have a 46 percent chance of failing or leaving within 18 months.
  • Also new are collaborative features that make it easier for hiring managers and recruiters to fine-tune role requirements for a given job.
  • She explains that Plum measures a candidate’s potential — separate from the job they’re applying to — and matches them “with the environment that will allow them to be the best versions of themselves.”
  • Recruiters see applicants ranked by their Plum Match score in a summary view, along with highlights such as “great ability to persuade” and “extraordinary decision-making skills” — a model that MacGregor said is ideal for roles with evolving, and sometimes nebulous, requirements.

NVIDIA Opening AI Research Lab in Toronto (blogs.nvidia.com)

  • The AI Research Lab will be led by computer scientist and University of Toronto professor Sanja Fidler.
  • While taking up the helm as director of AI research for the NVIDIA Toronto office, she’ll continue in her role as an assistant professor in the department of computer science at the University of Toronto.
  • Both the Toronto and Seattle research labs are part of the 200-person strong NVIDIA Research team, which works from nearly a dozen worldwide locations and focuses on pushing the boundaries of technology in machine learning, computer vision, self-driving cars, robotics, graphics and other areas.

Using FIFA18 Stats and AI to find the Perfect Signing (medium.com)

  • It could be replacing a key player who just left for a larger team, or signing a 16-year-old rising star in the hope of landing the next Messi, for example.
  • To make this process easier, I trained machine learning models with FIFA18 data, and then used them to create quality shortlists of players that match the characteristics of a given role-model player (a toy version of this similarity search is sketched after this list).
  • Imagine how great it would be for a scout to find all 17-year-old players with characteristics similar to Neymar’s at that age, without having to travel all over the world to discover them.
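
At its core, building a shortlist of players who resemble a role-model player is a nearest-neighbor search over player attribute vectors. The sketch below shows that idea with a handful of invented stat rows; it is not the author’s actual model or the real FIFA18 dataset.

```python
# Toy nearest-neighbor shortlist over player attributes (pace, dribbling, shooting, age).
# The players and numbers are invented for illustration, not real FIFA18 data.
import numpy as np
from sklearn.neighbors import NearestNeighbors

players = ["Role Model", "Prospect A", "Prospect B", "Prospect C"]
stats = np.array([
    [92, 94, 88, 26],  # Role Model
    [90, 91, 84, 17],  # Prospect A: similar profile, much younger
    [70, 65, 60, 17],  # Prospect B: young, but a different profile
    [91, 93, 87, 30],  # Prospect C: similar profile, older
])

nn = NearestNeighbors(n_neighbors=3).fit(stats)
distances, indices = nn.kneighbors(stats[[0]])  # neighbors of the role-model player

for dist, idx in zip(distances[0][1:], indices[0][1:]):  # skip the player himself
    print(f"{players[idx]}: distance {dist:.1f}")
```

In practice the attributes would be scaled (age and skill ratings live on different ranges), and a scout would filter on constraints such as age or position before ranking by distance.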

AI Daily – translating art reviews, meet Russia’s self-driving cars, is AI following you?

In today’s AI Daily, we look at the ways researchers are applying artificial intelligence to art reviews, we take a tour of Yandex’s plans to launch self-driving cars on the streets of Moscow, and we feature an AI project about wireless devices sensing people’s movements through walls.

Translating Art Reviews Using Artificial Intelligence (medium.com)

  • In the same way that a thesaurus gives you no indication of how a sequence of words is related to another sequence but only how individual words are related to other individual words, our “similarity” measure gives us no understanding of how a sequence of words is related to another.
  • Clearly, these two sentences are very similar, and the individual vectors of each of the words, as defined by GloVe, would be the same except for “Oli” and “John,” which would be very “similar” by our measure as they are both proper nouns and likely found in similar contexts relative to other words (a toy illustration of this similarity measure follows the list).
  • This means that when the network is introduced to a new sequence of words and asked to predict the final word it has learned something about the structure of sentences during training and can take advantage of this.
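
The word-level similarity the bullets describe can be made concrete with a toy cosine-similarity computation. The three-dimensional vectors below are made-up stand-ins for real GloVe embeddings (which typically have 100–300 dimensions), so the numbers only illustrate the mechanics.

```python
# Toy cosine similarity over invented 3-d "word vectors"; real GloVe vectors are much larger.
import numpy as np

vectors = {
    "oli":   np.array([0.8, 0.1, 0.3]),   # hypothetical embedding for a proper noun
    "john":  np.array([0.7, 0.2, 0.3]),   # another proper noun, nearby in the space
    "paint": np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["oli"], vectors["john"]))   # high: the two names share contexts
print(cosine(vectors["oli"], vectors["paint"]))  # lower: different contexts

# Averaging word vectors gives a crude sentence vector, which is exactly the kind of
# bag-of-words measure that ignores how a sequence of words is structured.
sentence_a = np.mean([vectors["oli"], vectors["paint"]], axis=0)
sentence_b = np.mean([vectors["john"], vectors["paint"]], axis=0)
print(cosine(sentence_a, sentence_b))  # high, even though one word differs
```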

Moscow Mules: These Russian Self-Driving Cars Are Assertive on Chaotic Streets (caranddriver.com)

  • From behind us, I hear the first of many honks on this three-mile demonstration ride, the first in Yandex’s car for any Western journalist.
  • “The problem in Moscow is that there are certain areas where they spoof GPS, where there are special radio frequencies emitted that tell the system that you are not where you are,” said Polishchuk.
  • “We are not going to fight with Google face to face in old areas like search, but in areas where machine learning is a core technology, like self-driving cars, there we can compete.”

Forget X-Ray Vision. You Can See Through Walls With Radio (wired.com)

  • But instead of bouncing off planes and returning to the ground, the signal here travels through the wall, bounces off a human (we’re full of water, which radio signals have a hard time penetrating), and comes back through the wall and into a detector.
  • “You’re not just receiving a reflection from the human body, you’re receiving reflections from everything,” says MIT CSAIL computer scientist Dina Katabi, coauthor on a new paper describing the process.
  • “The reflection from the wall will be much much bigger than the reflection from the signal that traversed the wall and reflected off the human body and traversed the wall again back toward you.”
  • “We use annotations in the image as the teacher for the neural network that is working just with radio signal.” The AI trained on video could then be matched to the mess of radio signals, allowing it to associate those labeled body parts with the subtle radio reflections coming back through the wall. (A simplified sketch of this teacher-student setup follows.)
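
The cross-modal training described in the last bullet can be sketched as a teacher-student loop: pose labels produced by a vision model on synchronized video become the training targets for a network that only ever sees radio data. The code below is a heavily simplified, hypothetical sketch with random tensors standing in for real radio features and teacher labels; it is not the MIT team’s actual architecture.

```python
# Hypothetical teacher-student sketch: labels derived from video supervise a network
# that sees only radio-signal features. All shapes, models, and data are toy stand-ins.
import torch
import torch.nn as nn

radio_dim, n_keypoints = 256, 14  # invented sizes, for illustration only

student = nn.Sequential(  # maps a radio-feature vector to 2-D keypoint coordinates
    nn.Linear(radio_dim, 128),
    nn.ReLU(),
    nn.Linear(128, n_keypoints * 2),
)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    radio_frames = torch.randn(32, radio_dim)             # stand-in for radio features
    teacher_keypoints = torch.randn(32, n_keypoints * 2)  # stand-in for labels a vision model produced from video
    pred = student(radio_frames)
    loss = loss_fn(pred, teacher_keypoints)               # the student mimics the vision "teacher"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```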