13 blockchain projects to help content creators and freelancers

Independent content creators are struggling to monetize their content at acceptable rates. Advertisers battle with fraud, ad blockers and declining advertising revenue. Publishers are finding it hard to monetize content through paywalls.

Blockchain promises solutions to remove unnecessary middlemen between freelancers and employers, prevent plagiarism, provide efficient escrow services and allow new types of reward programs.

Given the growth of the freelance economy, it seems like these solutions – many of them early-stage startups – are addressing a large, growing market of content creators.

Source: Dorie Clark, Harvard Business Review


If you are one of the 81 million people who use WordPress to publish content, you’ll soon be able to automatically timestamp your works within the WordPress content management system using the new Po.et plugin. (bitcoinmagazine.com)

February 8, 2018 marked the launch of Frost, an open API and set of developer tools from Po.et that will enable content publishers and developers to more easily register their creative works on the blockchain. The new API will enable integrations and decentralized applications, including the WordPress plug-in. Developers can find instructions on how to create an account, generate an API key, read the developer documentation and access the JavaScript library at the Frost website. (bitcoinmagazine.com)


The media sharing and advertising platform Snapparazzi has announced the release of the minimum viable product (MVP) of its blockchain-based platform. The app aims to allow everyone with a smartphone “to become a reporter” or a content creator. The user takes footage or a photo of newsworthy events with their smartphone and shares it using the platform. Interested buyers — TV, newspapers, radio, etc. — pay for the content in fiat currency. The user is in turn paid in SnapCoin, the platform’s token, for their contribution. The platform also targets content creators and says they can get paid substantially more with Snapparazzi than with YouTube for creating or watching content. (cointelegraph.com)


Currently, the most popular decentralized content platform is Steemit, a blogging and social networking website with over one million users. The platform rewards publishers according to the popularity of their content as determined by upvotes. The Steemit ecosystem involves three types of tokens: Steem, Steem Power and Steem Dollars. (nasdaq.com)

When publishers create popular content, they earn Steem Dollars, a stable currency that can be sold on the open market at any time. Steem token creation depends on activity within the Steemit platform, and tokens are usually distributed to content creators and curators as either Steem Power or Steem Dollars. Just like bitcoin, people can buy the cryptocurrency for speculative purposes. Steem and Steem Dollars can be converted to Steem Power to increase voting power. (nasdaq.com)

ASQ Protocol

Another project with an even bigger user base is the ASKfm-backed ASQ Protocol. Just like Steemit, the platform enables content creators to get paid for the value of their work rather than the revenue they generate from advertisements placed on their content. With this project, consumers can order their desired content from ASQ-supported platforms and pay with ASQ tokens. Brands can also sponsor content and reward users who engage with it. (nasdaq.com)


WOM is the creation of YEAY, an app that gives teens and young adults a place to shoot and share videos featuring streetwear styles. The app makes the videos shoppable by incorporating affiliate links. Brands may benefit from user-generated product recommendations, and creators earn WOM Tokens as rewards for the value driven through their videos. To date, YEAY reports it has received $7 million in seed funding from investors including the former COO of Airbnb, the former CEO of Deutsche Telekom, Grazia Equity, Mountain Partners, and others. (crowdfundinsider.com)


You42 says creators will be offered “dynamic commerce capabilities” enabling them to sell content through their own page – effectively creating a “personalized marketplace” that meets their needs. They would generate an income through the U42 token – a cryptocurrency that has been specially created for the platform. (cointelegraph.com)

Users and creators would also be able to use the platform’s internal currency, UCoin – which can be earned for certain activities or purchased via U42 tokens – to access premium content or tip content creators for work they appreciate. (cointelegraph.com)


A promising new project called AC3 hopes to make life easier for content creators and educators. It will allow them to share their work with their audience directly, using a more secure and transparent system that helps prevent plagiarism. (newsbtc.com)

Unlike many other blockchain companies, AC3 didn’t go down the ICO route when funding their project and chose to focus instead on building the technology. The platform is straightforward: fans and followers pay with AC3 tokens to access content, and creators can also sell their content for these same tokens. They can be used as currency to buy things like design and programming courses within the platform. (newsbtc.com)

Microsoft and Ernst & Young

Microsoft and Ernst & Young (EY) announced the launch of a blockchain solution for content rights and royalties management on Wednesday. The blockchain solution is first implemented for Microsoft’s game publisher partners. Indeed, gaming giant Ubisoft is already experimenting with the technology. After successful testing, Microsoft and EY hope to implement the solution across all industry verticals which require licensing of intellectual property or assets. (thenextweb.com)


The model implemented by the C3C blockchain, for instance, creates a direct consumer-to-creator network, replacing intermediaries in the process. Artists, writers, and musicians currently keep only somewhere between 10% and 20% of the revenue they generate on a typical online platform. The losses grow markedly when you add other expenditures such as payment processing fees, bank charges and value-added taxes: a payment processor, for instance, will take some 3% to 6% of any payment made to a content creator, while cryptocurrency payments eliminate these charges. (c3c.network)

Blockchain not only provides complete control over copyright but also empowers creators to price their content dynamically and perform micro-metering. Dynamic pricing is among blockchain’s greatest benefits, enabling prices to adjust based on demand, advertiser support, and many other factors. (c3c.network)


The community-owned open source social networking platform Minds has amassed one million users and has recently launched a cryptocurrency reward program based on the ethereum blockchain for all users on the platform. (zdnet.com)

Minds is also introducing a direct peer-to-peer advertising tool that allows users to offer tokens to one another in exchange for shares of specific content. (zdnet.com)


Escrow services, which hold money independently from two parties until all terms are satisfied, could help alleviate some of the stress between freelancers and employers. StratusCore added a Digital Escrow service to its platform — which was built using blockchain technology — last year. With the service, employers and freelancers can agree to the terms of a project and the necessary benchmarks or milestones that must be met to ensure it is completed on time. (pymnts.com)

When an agreement is reached, an employer deposits the funds for the project into the StratusCore Digital escrow account. Funds are disbursed from the escrow account directly into a freelancer’s preferred bank account once the necessary deliverables are met and the digital assets are uploaded to the platform. This occurs within one to two days after the employer digitally signs off on the delivered asset and the escrow funds are released. (pymnts.com)

ARA Blocks

Among other factors, the often unavoidable presence of middlemen significantly shrinks and limits the profits of content creators. Luckily for them, there is now a new platform that promises to make things easier, more profitable and more secure, all thanks to blockchain technology: meet ARA Blocks. (technewsbeat.com)

With this platform, content creators will be able to sell and distribute their content directly to their target audiences, eliminating the need for the aforementioned middlemen and effectively increasing their revenue. And, given that it is built on Ethereum (which uses blockchain), all of this comes with the technology’s hallmarks, namely security and traceability. (technewsbeat.com)

ACG Network’s DApp

ACG Network’s DApp store would allow content creators in the digital content industry around the world to develop and release their own DApps and Ethereum-based smart contracts. These DApps are tokenized, and can therefore proliferate within the ACG Network public blockchain for voting and forecasting purposes by accessing Ethereum smart contracts. For an application to be considered a DApp on the blockchain, it must meet the following criteria: the application must be completely open-source, and it must operate autonomously, with no entity controlling the majority of its tokens. (medium.com)

Can AI make you a faster writer?

Sorry for the spoiler, but the simple answer to the question is yes — artificial intelligence can absolutely make you a faster writer. There’s little doubt that AI-driven tools and platforms are able to speed up most aspects of the editorial creation process.

Just think how easy it is nowadays to auto-correct text, to validate facts, or search for synonyms online. It used to take serious time and effort to thumb through dictionaries, encyclopedias, and thesauruses. This information, digitally stored in the cloud, is now delivered at lightning speed through our devices.

So the question isn’t whether AI can save you time in the editorial process — it’s more about how artificial intelligence impacts different stages of the content creation process, and what you do with the time it frees up. And the research on this point is interesting.

Reinvesting the gains 

For the most part, content creators appear to be investing their time savings back into the editorial process: rather than speeding up and finishing faster, bloggers are spending longer on their posts than ever before.

The data is clear. Results from Orbit Media’s 2017 survey of bloggers found the average time spent creating a blog post was 3 hours 20 minutes, an increase of approximately 40% since 2014. By a similar measure, the average blog post in the 2017 survey came in at 1142 words, also increasing by around 40% from three years earlier. The heyday of the short post is gone.

So the time savings derived from advances in technology—if only from improvements in semantic search and word processor functionality—are being invested back into the content on a surprisingly consistent basis. But why?

The gravitation toward longer blogs in recent years certainly has something to do with search engines increasingly rewarding extended content. In a city of tall buildings, you need to build your structure higher than the others to stand out.

One further thesis is that blog posts are getting longer because it has simply become easier to compose a lengthy article than it was a decade ago.

Think about it for a moment—importing new material and adding visual media into your word processor is “plug and play” nowadays in most content management systems. And for every new element you add, there’s an opportunity for further comment. Bloggers are continually encouraged by the system to make their pieces longer.

Compounding this, nobody is really telling the writer when to stop. Sure, there might be a word count of sorts. But, unlike print, there is no fixed endpoint for digital articles. Once you start writing, you could theoretically go on forever. Not that we would advise that, of course.

Whatever happened to writer’s block? 

Once upon a time, writers would script movies and pen books about not being able to write. French novelist Flaubert famously agonized for hours over a single word. Writer’s block was a thing not so long ago, but we seem to be hearing less about it these days.

Why might that be? For one, the process of getting started on your creative work is increasingly painless. Numerous topic analysis tools are available to help get your content research underway, while advances in natural language processing are enabling machine learning platforms to generate deep research at the click of a button. Editorial AI is like a personal assistant, gently nudging, suggesting, and prodding you along the way.

It’s also easier than ever to work with others on editorial projects. If the typewriter was an island technology, the personal computer and the internet moved everyone a step closer together.

Since the advent of cloud technology, the floodgates have opened. Digital solutions support every aspect of creative collaboration from ideation and scheduling through development and review.

There has been a similar proliferation in the available tools for optimizing, repurposing, scaling, personalizing, and distributing content both within domestic markets and internationally across languages. As part of the ongoing explosion of martech platforms, AI-led tools can schedule social posts for optimal times, uncover traits of top-ranking content, A/B-test landing pages, and manage paid search and paid social campaigns. They can offer live recommendations on improving content performance, assist with design and pagination, and automatically recognize images. IBM Watson, for one, is on a mission to hoover up all the unstructured data on the internet and give it form.

What this all means is that it is easier than ever to produce insightful, engaging content supported by AI. At the same time, this has had the parallel effect of making it even more difficult for your content to stand out from the crowd. Experts have talked for several years now about the requirement for 10x content and the almost Herculean levels of effort needed to differentiate your editorial.

Human versus machine

And it is not as though creative jobs are beyond the reach of the machines. Many aspects of the content and copywriter’s job are already subject to some degree of automation.

The Associated Press has long used AI writing technology to produce data-heavy sports and financial reporting, and NLP firms are working hard and fast to offer organizations AI solutions for automated report writing.

Or look at advertising copy. In launching its proprietary AI copywriter, retail giant Alibaba insisted that the introduction of the technology would allow advertising creatives to spend more time on higher-end analysis and tasks. “Copywriters will shift from thinking up copy—one line at a time—to choosing the best out of many machine-generated options, largely improving efficiency,” the company said in a statement.

Nancy A. Shenker, founder and CEO of marketing firm theONswitch, expects AI technology to play a growing role in content development in the coming years. “My estimate is that 50% of all content will be developed by machines, with oversight and editing by humans,” she told EContent Magazine. “Artificial intelligence will recommend topics based on trends, gather facts—and validate them—and assemble very tight posts and suggested graphics based on those combinations.” Flesh and blood creatives will add “soul and humor as needed,” according to Shenker.

Can machines get creative?

The march of AI is inevitable. 2017 research from McKinsey found that across the global workforce approximately 50% of “current work activities are technically automatable by adapting currently demonstrated technologies.”

As always, the critical question is how we harness advances in technology to our collective betterment or ill.

For now, the reality is that humans are not getting sidelined in the creative process. Some might say we are even entering a golden age of AI-assisted creative.

While AI is great at aggregating data, identifying patterns, and generating recommendations, it is not so good yet at coming up with original, well-edited, and emotionally engaging material that helps your content stand out.

The oversight of strategy in content development—why we create what we create and how the content maps to broader commercial objectives—remains a fundamentally human domain, at least for the time being.

Similarly, the ability to induce an emotional response from a piece of content, or to decide whether something is fit to publish, continues to rely on human judgement.

Throw into your content some proprietary research, take a standpoint on your topic, and inject some voice and style, and you’re doing things that machines still find difficult.

The interplay between human and machine is fluid, fast-moving, and fascinating. An architect who uses 4-D CAD visualization is still 100% an architect and a carpenter who deploys AI to determine the best way to sequence a home-building project based on local weather conditions is still 100% a carpenter. They are just using better tools than were previously available.

And so it goes with writers and other creative professionals. Write faster, write harder, write better.

20 Applications of Automatic Summarization in the Enterprise

Summarization has been and continues to be a hot research topic in the data science arena. While text summarization algorithms have existed for a while, major advances in natural language processing and deep learning have been made in recent years. Many internet companies are actively publishing research papers on the subject. Salesforce has published various groundbreaking papers presenting state-of-the-art abstractive summarization. In May 2018, the largest summarization dataset to date was revealed in a project supported by a Google Research award.

While there is intense activity in the research field, there is less literature available regarding real-world applications of AI-driven summarization. One of the challenges with summarization is that it is hard to generalize. For example, summarizing a news article is very different from summarizing a financial earnings report. Certain text features like document length or genre (tech, sports, finance, travel, etc.) make summarization a serious data science problem to solve. For this reason, the way summarization works largely depends on the use case, and there is no one-size-fits-all solution.

Summarization: the basics

Before diving into an overview of use cases, it is worth explaining a few basics around summarization:

There are two main approaches to summarization:

  • Extractive summarization: works by selecting the most meaningful sentences in an article and arranging them in a coherent manner. This means the summary sentences are extracted from the article without any modifications.
  • Abstractive summarization: works by paraphrasing the most important ideas in the article in its own words.
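To make the extractive approach concrete, here is a minimal sketch of a frequency-based extractive summarizer in Python. The stopword list and scoring heuristic are illustrative assumptions, not the algorithm any particular product uses:

```python
import re
from collections import Counter

# A tiny, illustrative stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "and", "that", "it"}

def extractive_summary(text, num_sentences=2):
    """Score each sentence by the average frequency of its non-stopword
    terms, then return the top-scoring sentences in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        terms = [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]
        return sum(freq[t] for t in terms) / (len(terms) or 1)

    top = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Preserve the original ordering so the summary reads coherently.
    return " ".join(s for s in sentences if s in top)
```

Because the output sentences are copied verbatim from the input, this is extractive by definition; an abstractive system would instead generate new sentences.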

There are also two scales of document summarization:

  • Single-document summarization: the task of summarizing a standalone document. Note that a “document” could refer to different things depending on the use case (URL, internal PDF file, legal contract, financial report, email, etc.).
  • Multi-document summarization: the task of assembling a collection of documents (usually through a query against a database or search engine) and generating a summary that incorporates perspectives from across documents.

Finally, there are two common metrics any summarizer attempts to optimize:

  • Topic coverage: does the summary incorporate the main topics from the document?
  • Readability: do the summary sentences flow in a logical way?
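Topic coverage can be approximated with a simple term-overlap recall, in the spirit of the ROUGE-1 metric. The sketch below is a crude proxy for illustration, not how production summarizers are evaluated end to end:

```python
import re

def topic_coverage(summary, document):
    """Crude topic-coverage proxy: the fraction of the document's distinct
    terms that also appear in the summary (ROUGE-1-style recall)."""
    tokenize = lambda text: set(re.findall(r"[a-z]+", text.lower()))
    doc_terms = tokenize(document)
    if not doc_terms:
        return 0.0
    return len(tokenize(summary) & doc_terms) / len(doc_terms)
```

Readability is much harder to score automatically; in practice it is often assessed with human judgments or learned models rather than a formula this simple.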

Use cases in the enterprise:

These are some use cases where automatic summarization can be used across the enterprise:

1. Media monitoring

The problem of information overload and “content shock” has been widely discussed. Automatic summarization presents an opportunity to condense the continuous torrent of information into smaller, more digestible pieces.

2. Newsletters

Many weekly newsletters take the form of an introduction followed by a curated selection of relevant articles. Summarization would allow organizations to further enrich newsletters with a stream of summaries (versus a list of links), which can be a particularly convenient format on mobile.

3. Search marketing and SEO

When evaluating search queries for SEO, it is critical to have a well-rounded understanding of what your competitors are talking about in their content. This has become particularly important since Google updated its algorithm and shifted focus towards topical authority (versus keywords). Multi-document summarization can be a powerful tool to quickly analyze dozens of search results, understand shared themes and skim the most important points.

4. Internal document workflow

Large companies are constantly producing internal knowledge, which frequently gets stored and under-used in databases as unstructured data. These companies should embrace tools that let them re-use already existing knowledge. Summarization can enable analysts to quickly understand everything the company has already done in a given subject, and quickly assemble reports that incorporate different points of view.

5. Financial research

Investment banking firms spend large amounts of money acquiring information to drive their decision-making, including automated stock trading. When you are a financial analyst looking at market reports and news every day, you will inevitably hit a wall and won’t be able to read everything. Summarization systems tailored to financial documents like earnings reports and financial news can help analysts quickly derive market signals from content.

6. Legal contract analysis

Related to point 4 (internal document workflow), more specific summarization systems could be developed to analyze legal documents. In this case, a summarizer might add value by condensing a contract to its riskiest clauses, or by helping you compare agreements.

7. Social media marketing

Companies producing long-form content, like whitepapers, e-books and blogs, might be able to leverage summarization to break down this content and make it sharable on social media sites like Twitter or Facebook. This would allow companies to further re-use existing content.

8. Question answering and bots

Personal assistants are taking over the workplace and the smart home. However, most assistants are fairly limited to very specific tasks. Large-scale summarization could become a powerful question answering technique. By collecting the most relevant documents for a particular question, a summarizer could assemble a cohesive answer in the form of a multi-document summary.

9. Video scripting

Video is becoming one of the most important marketing mediums. Besides video-focused platforms like YouTube or Vimeo, people are now sharing videos on professional networks like LinkedIn. Depending on the type of video, more or less scripting might be required. Summarization can be an ally when looking to produce a script that incorporates research from many sources.

10. Medical cases

With the growth of tele-health, there is a growing need to better manage medical cases, which are now fully digital. As telemedicine networks promise a more accessible and open healthcare system, technology has to make the process scalable. Summarization can be a crucial component in the tele-health supply chain when it comes to analyzing medical cases and routing these to the appropriate health professional.

11. Books and literature

Google has reportedly worked on projects that attempt to understand novels. Summarization can help consumers quickly understand what a book is about as part of their buying process.

12. Email overload

Companies like Slack were born to keep us away from constant emailing. Summarization could surface the most important content within email and let us skim emails faster.

13. E-learning and class assignments

Many teachers utilize case studies and news to frame their lectures. Summarization can help teachers more quickly update their content by producing summarized reports on their subject of interest.

14. Science and R&D

Academic papers typically include a human-made abstract that acts as a summary. However, when you are tasked with monitoring trends and innovation in a given sector, it can become overwhelming to read every abstract. Systems that can group papers and further compress abstracts can become useful for this task.

15. Patent research

Researching patents can be a tedious process. Whether you are doing market intelligence research or looking to file a new patent, a summarizer to extract the most salient claims across patents could be a time saver.

16. Meetings and video-conferencing

With the growth of tele-working, the ability to capture key ideas and content from conversations is increasingly needed. A system that could turn voice to text and generate summaries from your team meetings would be fantastic.

17. Help desk and customer support

Knowledge bases have been around for a while, and they are critical for SaaS platforms to provide customer support at scale. Still, users can sometimes feel overwhelmed when browsing help docs. Could multi-document summarization provide key points from across help articles and give the user a well-rounded understanding of the issue?

18. Helping disabled people

As voice-to-text technology continues to improve, people with hearing disabilities could benefit from summarization to keep up with content in a more efficient way.

19. Programming languages

There have been multiple attempts to build AI technology that could write code and build websites by itself. It is possible that custom “code summarizers” will emerge to help developers get the big picture of a new project.

20. Automated content creation

“Will robo-writers replace my job?” That’s what writers are increasingly asking themselves. If artificial intelligence is able to replace any stage of the content creation process, automatic summarization is likely going to play an important role. Related to point 3 (applications in search marketing and SEO), writing a good blog post usually involves summarizing existing sources for a given query. Summarization technology might reach a point where it can compose an entirely original article by summarizing related search results.

Artificial intelligence in SEO and content optimization

This post examines how artificial intelligence is changing SEO and proposes specific techniques for content marketers to adapt accordingly.

In the past five years, Google has introduced two algorithm updates that put a clear focus on content quality and language comprehensiveness. In 2013, Hummingbird gave search engines semantic analysis capability. In 2015, Google announced RankBrain, which marked the beginning of Google’s AI-first strategy. This means that Google uses multiple AI-driven techniques to rank search results.

As a result, search engine optimization (SEO) has shifted focus from keywords to topical authority. Keyword research is still important, but its role has changed. Simply put, AI systems can now understand far more than individual keywords. Much like humans do, new systems can understand relationships between topics and develop a contextual interpretation. In other words, AI is learning to read.

Increasingly, marketers are using AI-powered tools to help them reverse engineer the way search engines find the best content. When it comes to research tools, one widely discussed topic is the difference between SEO and content optimization.

SEO runs on keywords, content optimization runs on NLP

SEO has traditionally run on keywords, while content optimization runs on Natural Language Processing (NLP).

Typically, your content strategy will start with a set of important keywords. However, each keyword now has more sophisticated properties than it used to: What user intent does it relate to? What broader topics does it link with? What cluster does it belong to?

Content optimization is all about understanding these additional properties in language. The ultimate goal is to create the most authoritative content for a given query, optimized for both topic breadth (horizontal topic coverage) and depth (how detailed you go into the topic).

We are teaching machines to read

The terms “topic modeling” and “latent semantic indexing” have been widely used in the digital marketing and SEO arenas to describe the way semantic search works. It is worth exploring some specific data science techniques that are powering the latest AI-powered tools.

Word vectors:

Word vectorization is a natural language processing (NLP) technique where words and phrases from a vocabulary are mapped to vectors of real numbers. Word vectors typically have around 200 dimensions, meaning each word gets a position in a 200-dimensional space. Representing words as multi-dimensional vectors allows us to perform similarity comparisons, among other operations. The sum of word vectors may also be used to calculate document vectors. A typical challenge with word vectors is ambiguity: different meanings of a word (“apple” vs. “Apple”) can be embedded into the same vector location. More advanced vectors, called “sense embeddings”, solve this problem by interpreting each sense of the word differently. For example, a sense embedding might be able to position “apple” the fruit and “Apple” the company in very different locations within the vector space.

Simple 2-dimensional vector representation of a small house-related vocabulary.
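Cosine similarity is the standard way to compare word vectors. The tiny hand-made 2-D vectors below are purely illustrative assumptions; real embeddings are learned from large corpora and have hundreds of dimensions:

```python
import math

# Hand-made 2-D vectors for illustration only: "house" and "apartment"
# point in roughly the same direction, while "banana" does not.
vectors = {
    "house":     (0.9, 0.1),
    "apartment": (0.85, 0.2),
    "banana":    (0.1, 0.95),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```

Under this measure, similar words score close to 1.0 and unrelated words score much lower, which is exactly the property semantic search exploits.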

Named entity recognition (NER):

This technique seeks to locate and classify named entities in text into pre-defined categories such as concepts, organizations, locations or people. Traditionally, NER relied on large databases like Wikipedia to recognize known entities. Current neural networks are trained on tagged NER datasets to learn language patterns and identify entities the system has never seen before. For example, say your new startup appears in the news tomorrow. A good NER system should be able to classify it as an organization, even if the startup name is something new in the vocabulary. SEO tools with this capability provide a deeper understanding of a topic as they can identify more unique sub-topics in context. While search engines utilize automatic NER systems, it is always a good idea to enrich your data with schema markup conventions.

Example of automatic NER run on a sentence.
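Production NER relies on trained models, as described above; the toy gazetteer lookup below (with an entity list invented purely for illustration) only sketches the task’s input/output shape:

```python
import re

# Toy "gazetteer" of known entities; real NER systems generalize to
# unseen names via trained models rather than fixed lists.
KNOWN_ENTITIES = {
    "Google": "ORGANIZATION",
    "Paris": "LOCATION",
    "Marie Curie": "PERSON",
}

def tag_entities(sentence):
    """Return (entity, label) pairs found in the sentence."""
    found = []
    for name, label in KNOWN_ENTITIES.items():
        if re.search(r"\b" + re.escape(name) + r"\b", sentence):
            found.append((name, label))
    return found
```

The limitation is obvious: a startup name that is not in the list is simply missed, which is precisely the problem neural NER models solve.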

Query classification:

Query classification is the process where a search engine deciphers user intent from a short text input. The main challenge is related to ambiguity. Search engines like Google collect click-through data from users to validate search intent and train machine learning models around them. Techniques like word vectors and NER are also used in query classification algorithms to compare the topics in your query against a set of potential results.
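A hand-written rule sketch can illustrate what intent classification produces, though, as noted above, real search engines learn these mappings from click-through data rather than fixed rules. The keyword patterns here are invented for illustration:

```python
def classify_query(query):
    """Toy intent rules mapping a query to a common SEO intent category."""
    q = query.lower()
    # Question-style queries usually signal informational intent.
    if q.startswith(("how ", "what ", "why ", "who ", "when ")):
        return "informational"
    # Purchase-related vocabulary signals transactional intent.
    if any(word in q for word in ("buy", "price", "cheap", "deal")):
        return "transactional"
    # Brand or destination vocabulary signals navigational intent.
    if any(word in q for word in ("login", "homepage", "official site")):
        return "navigational"
    return "unknown"
```

A learned classifier replaces these brittle rules with statistical patterns, which is how ambiguity is handled at scale.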

Question answering:

This is concerned with building systems that automatically answer questions posed by humans in natural language. In general, there are two types of questions to tackle: fact-based (e.g., what is the capital of France?) and open-domain (e.g., what is the future of SEO?). The latter usually involves analyzing dozens of search results for a given search query and composing a “multi-document summary”. Question answering is one of the most active areas of research as it powers new mediums like voice search, which may require special considerations when it comes to SEO.

Automatic document summarization: 

Text summarization is the process of shortening a text document with software, in order to create a summary with the major points of the original document. Technologies that can make a coherent summary take into account variables such as length, writing style and syntax. Salesforce has made major breakthroughs in summarization.
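A crude extractive summarizer illustrates the basic idea: score each sentence by how frequent its words are in the whole document, then keep the top scorers. This is a toy sketch; state-of-the-art systems such as Salesforce's use neural abstractive models instead:

```python
import re
from collections import Counter

def summarize(text, max_sentences=1):
    """Toy extractive summarizer: rank sentences by the document-wide
    frequency of their words, return the top ones in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in ranked)

doc = ("Search engines use AI. AI helps search engines understand text. "
       "Understanding text requires training data.")
print(summarize(doc))
```

On this toy document, the middle sentence wins because it reuses the most frequent words ("AI", "search", "engines", "text").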

Below is a screenshot from Frase that shows the taxonomy of a summary:

automatic summarization

Textual entailment: 

Entailment is a fundamental concept in logic, which describes the relationship between statements that hold true when one statement logically follows from one or more statements. A valid logical argument is one in which the conclusion is entailed by the premises, because the conclusion is the consequence of the premises. Textual Entailment algorithms can take a pair of sentences and predict whether the facts in the first necessarily imply the facts in the second one. This can be a useful technique to measure logic and cohesiveness in documents.

The importance of pillar pages and topic clusters

It is widely known that Google analyzes your full website to determine whether your content demonstrates topic authority in certain subjects. AI systems frequently employ document clustering as a technique to group data according to specific properties. For example, if your website has thousands of pages, a clustering algorithm may be able to group them by theme. If your website doesn’t present any clear themes, it might mean it lacks focus or expertise.

To mirror what document clustering algorithms do, SEO is shifting to a topic cluster model, where pillar pages act as nodes connecting sub-pages. This model is a fairly sophisticated way to organize your website’s information architecture and content strategy.
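As a rough illustration of what a clustering algorithm does with pages, here is a greedy bag-of-words sketch. The pages and the 0.3 similarity threshold are assumptions; a real pipeline would use word vectors and an algorithm like k-means:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def cluster_pages(pages, threshold=0.3):
    """Greedy toy clustering: add each page to the first cluster whose
    seed page it resembles, otherwise start a new cluster."""
    vectors = {title: Counter(text.lower().split())
               for title, text in pages.items()}
    clusters = []  # each cluster is a list of page titles
    for title, vec in vectors.items():
        for cluster in clusters:
            if cosine(vec, vectors[cluster[0]]) >= threshold:
                cluster.append(title)
                break
        else:
            clusters.append([title])
    return clusters

pages = {
    "seo-basics": "seo search ranking keywords search",
    "seo-audit": "seo audit search ranking crawl",
    "recipes": "pasta sauce tomato basil",
}
print(cluster_pages(pages))
```

The two SEO pages group together while the unrelated page forms its own cluster; a site whose pages refuse to group this way is the "lacks focus" signal described above.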

Topic cluster model for content marketing strategy

How to fit AI tools into your content creation process

So now that we better understand the way search engines “think”, it is time to consider the overall workflow that will help us match our content to what our target audience is actually interested in.

1. Perform a semantic content audit

Crawl your entire website and analyze all of its topics. Does it look like your content is well organized around cohesive themes? Which are the most relevant topics? Which pages receive the most internal links? A semantic content audit is a site-wide analysis of your website’s content that measures topic breadth. Ideally, you would perform the same analysis on both your competitors and external industry thought leaders. The goal here is to understand the big picture and identify topic gaps. To accomplish this, you will need a tool that can crawl your full website, automatically extract key topics (through named entity recognition) and understand semantic relationships (through word vectors).
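The topic-gap step at the end can be sketched with plain counting. The topic lists below are made-up placeholders for what an NER-based crawler would actually extract:

```python
from collections import Counter

def topic_gaps(our_topics, competitor_topics, top_n=3):
    """Return the competitor topics we never mention, most frequent first."""
    ours = Counter(our_topics)
    theirs = Counter(competitor_topics)
    gaps = Counter({t: c for t, c in theirs.items() if ours[t] == 0})
    return [topic for topic, _ in gaps.most_common(top_n)]

# Illustrative topic counts, as if extracted by NER from crawled pages.
ours = ["link building", "keywords", "keywords", "site speed"]
theirs = ["keywords", "voice search", "voice search", "schema markup"]
print(topic_gaps(ours, theirs))
```

Here the audit surfaces "voice search" and "schema markup" as topics competitors cover that our toy site never mentions.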

2. Define topic clusters

Browse topics from the content audit to identify groups and semantic associations. Build a list of sub-topics for each cluster. These topics should be optimized for two key metrics:

  • Search growth: topics that are receiving increased exposure in search engine queries.
  • Competition: topics that your direct competitors might have failed to mention.

3. Develop pillar pages

Compose an outline of the main topics your pillar page should include. Define a search query you would like your pillar page to rank for and perform semantic analysis on the top results. Make sure your pillar page covers key topics optimizing for these two metrics:

  • Topic coverage: your content should cover the most relevant topics from SERP pages. Of course, beware of keyword stuffing and make sure your story flows smoothly. This is where you can pay attention to document length; a story that aggressively covers all the top topics mentioned in SERP results will likely have to be longer. As an alternative, you may want to consider breaking down your topics into multiple shorter articles.
  • Content authenticity: while you have to align your content with the topics mentioned by SERP pages, you also have to find a unique angle to the story. One way to do this is to use related topics without repeating the exact terms your competitors use. Remember that word vectors capture similarity between topics, so by using closely related topics you may still rank high in search. Once you’ve achieved wide and authentic topic coverage, it is always valuable to incorporate proprietary insights nobody else has mentioned.
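To make the word-vector point concrete, here is cosine similarity computed on toy, hand-made 3-dimensional vectors. Real embeddings are learned from text and have hundreds of dimensions; the numbers below are pure assumptions:

```python
import math

# Hand-made toy "word vectors" (an assumption for illustration; real
# embeddings like word2vec or GloVe are learned, not written by hand).
VECTORS = {
    "car":        (0.90, 0.10, 0.00),
    "automobile": (0.85, 0.15, 0.05),
    "banana":     (0.00, 0.20, 0.90),
}

def similarity(word_a, word_b):
    """Cosine similarity: near 1.0 means related, near 0.0 unrelated."""
    va, vb = VECTORS[word_a], VECTORS[word_b]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = (math.sqrt(sum(x * x for x in va))
            * math.sqrt(sum(y * y for y in vb)))
    return dot / norm

print(similarity("car", "automobile"))  # close to 1.0
print(similarity("car", "banana"))      # close to 0.0
```

This is why a page using "automobile" can still rank for queries about "car": the vectors point in nearly the same direction even though the strings differ.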

4. Develop content for each sub-topic

Use the same outline process described in point 3 to develop content around the sub-topics that point to your pillar pages.

5. Continuous content optimization and re-publishing

Monitor what thought leaders publish about your target topics. This will help you come up with ideas to either write new content or optimize existing content around up-and-coming topics. The strategy of re-publishing content has proven to generate positive results in search rankings.

Can obsessive SEO limit your creativity?

Today’s marketers use many tools and there is certainly a sense of software fatigue. It almost looks like you have to break down your content creation workflow into stages that might end up limiting your creativity. Am I forgetting a keyword? Am I mentioning this keyword too much?

At some point, you have to ask yourself whether you are over-obsessing about content optimization. In my view, research and content creation should go hand in hand and work together in a more natural way. For example, based on what you are writing and your intended outcome, a system should be able to recommend topics in context. Helping writers incorporate SEO best practices into their creative workflow is something we think about at Frase.

There are different tools that can help you accomplish some of the analytical tasks explained in this post. At Frase, we’ve created a platform that helps content marketers perform large-scale semantic content audits, along with a writing tool that acts as a Research Assistant. It is totally free to try!

frase editor

15 things you should consider before building an AI startup

At Frase, we are using AI to improve the way people write and research on the internet. Back in 2016, we came up with the idea of an AI-powered Research Assistant, an intelligent agent that would interact with the writer, providing sources and ideas in context. Since the early days, Frase has largely been a technology play that required lots of research, and we had (and continue to have) many unanswered questions. This technological uncertainty can have a major impact on your company, and you have to be ready to embrace it.

If you are about to start an AI company, particularly if you are a non-technical CEO, these are some things you should consider:

1. You might need a dataset that doesn’t exist

In simple terms, artificial intelligence is possible because we train computers to learn from data. For example, if we have a dataset of 20,000 tweets, where 10,000 are positive and 10,000 are negative, we could train a model to detect sentiment in text. Sounds easy, right?
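That tweet-sentiment example can be sketched with a tiny Naive Bayes classifier. The four hand-written "tweets" below are illustrative assumptions standing in for the 20,000-tweet dataset:

```python
import math
from collections import Counter

def train(examples):
    """Count word occurrences per label over (text, label) pairs."""
    word_counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        word_counts[label].update(text.lower().split())
    return word_counts

def predict(word_counts, text):
    """Naive Bayes with add-one smoothing: pick the most likely label."""
    scores = {}
    for label, counts in word_counts.items():
        total, vocab = sum(counts.values()), len(counts)
        scores[label] = sum(
            math.log((counts[w] + 1) / (total + vocab))
            for w in text.lower().split()
        )
    return max(scores, key=scores.get)

tweets = [
    ("love this great product", "pos"),
    ("awesome experience love it", "pos"),
    ("terrible service hate it", "neg"),
    ("awful waste of money", "neg"),
]
model = train(tweets)
print(predict(model, "what a great experience"))
```

Even this toy model shows why the dataset is the hard part: with four examples it can only recognize sentiment words it has literally seen before.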

When it comes to developing AI solutions, having access to a good dataset is frequently the most challenging part. Recently, IBM released the largest-ever dataset of facial images, with over 1 million tagged images; that’s a big deal. In the case of Frase, one traditional dataset challenge is summarization. While there is a lot of activity in the space, I wouldn’t say there is a great dataset available. This may also mean there is an opportunity to build it.

When you hit a dataset wall, you either have to build your own dataset (time-consuming and potentially costly, but possible), use the best available proxy dataset, or simply move on and focus on other problems.

2. Servers get expensive: CPU vs. GPU

Without getting into technical details, CPUs have traditionally been the hardware powering the cloud. GPUs are better suited to massively parallel workloads, including video games and deep learning. The bad news is that some of the most promising technologies in AI require a GPU machine, and GPUs are expensive, possibly too expensive when you are a very early-stage startup. Alternative solutions: buy your own hardware, or get credits from companies like Google, Amazon or Microsoft.

3. AI is not always the right solution

Nowadays, AI is such a hot topic that everyone wants to make something “AI-driven”. While the hype is great, sometimes basic statistics can do the job equally well. Using the best neural network solution might only gain you 2-3% in accuracy, which is something your users might never feel… but they will feel the slowdown if you haven’t put up the costly architecture to support it.

4. Users will complain during the early stages

Some people are generally skeptical about AI, and often they can be too judgmental. The early users of your AI product might feel a bit frustrated at the beginning. If your model has an accuracy of 65%, that means 35% of the time your users will not get satisfactory results. Of course, this sucks for the user.

5. AI is a black box, but users want to know how it works

You see data scientists brag about their inventions, but in reality they don’t really know what is going on inside a neural network. By design, a neural network has hidden layers, and all you can actually see is an input and an output. This can be frustrating when you are trying to improve a given model but don’t have enough data points to take new directions. In addition, users will frequently ask you about the “algorithm” behind your magical product. The reality is that you can’t really explain the inner workings; you can only give a generic explanation of the process behind it.

6. Developing AI systems requires a rigorous scientific research process

Working on AI can feel like having a university department in your company. Successful machine learning practitioners usually have an academic background or actively contribute to academic journals. For example, in the area of text summarization, Salesforce has published numerous papers and some of their authors are industry leaders.

7. You have to follow arXiv every day

In relation to point 6, you have to live and breathe arXiv to keep up to date. Even if you are not a data scientist or developer, you can only benefit from following what is happening in the space. Don’t be intimidated by the technical formatting and mathematics in most papers; even laypeople can understand a paper’s abstract and overall direction. That is valuable not only for teaching yourself, but also as something you can pass on to your researchers.

8. Be ready for manual labor and repetitive tasks as you test your creation

Every now and then, you will have to spend time doing very repetitive tasks to evaluate a given model, or assemble a testing dataset. In relation to point 5, there is nothing better than using your own product to recognize its weaknesses.

9. Generalized versus highly specific models

Again, I will use summarization to illustrate this point. You could train a summarization model on a massive dataset of news articles, and that may work well when summarizing news articles. But what if you try to summarize a technology blog post, will it work equally well? In that case, you might consider training a separate model for technology blog posts. Of course, having numerous models creates a challenge related to infrastructure, performance, etc.

Once you’ve decided what model you want to work on, don’t try to predict the subject matter your users will use. You will fail. Design your systems abstractly and generally because somebody will always try something absolutely ridiculous on a demo.

10. Data scientists

There is a supply problem: market demand for data scientists is very high, so salaries are through the roof. On the other hand, more and more data scientists are being trained in both universities and online courses. Talented software developers can become great data scientists over time.

11. Open source libraries are great, until you dig deeper

There are a few de facto open source libraries and frameworks used in data science. Most of them are great, particularly those supported by big companies, like TensorFlow (Google). Of course, AI is a very new field and some libraries are fairly young, which increases the risk of bugs or unexpected issues. Occasionally, you will also find that some libraries don’t release their best-kept secrets: their developers often go on to build businesses around their open source library that seem to work much better than yours. Don’t be afraid to reach out and start a conversation with them.

12. A data scientist cannot be your CTO

If you are assembling a team for your AI startup, I believe you need at least 2 partners: a CTO taking care of the whole platform, and a dedicated data scientist who is fully focused on machine learning.

13. Make your own data

We’ve already discussed the challenge with datasets. The ultimate solution is when your own product produces enough data to train models around it. The most valuable thing about today’s AI companies is their in-house generated data. Of course, this may take time and be a long term strategy. Many large companies are starting to look inside and realize they have massive amounts of unstructured data. This represents a major opportunity for them to develop AI solutions, although they might not have in-house data science talent.

14. User experience is key for AI systems to succeed

Related to point 4 (users will complain), users will always hit edge cases where your AI system gets confused. You must design UX concepts that either hide or mitigate your model’s errors. A good example is a bot: human-AI interaction creates a more guided journey in which the user can help your model take fewer risks.

15. “How would a human do it?”

If you are thinking of starting an AI company, you probably have an idea that will revolutionize a certain human process. Something that helps me think about AI solutions is asking myself how a human would solve the problem.

Good luck.

What scares you the most about Artificial Intelligence?

We are constantly reading stories about new applications of artificial intelligence, and for the most part, it is good news. On the other hand, every now and then, we also read stories about the negative consequences of artificial intelligence. Over the past few years we’ve heard concerning remarks from thought leaders ranging from Stephen Hawking to Elon Musk describing the darker side of artificial intelligence.

In partnership with Lucid, the leading Human Answers Platform, we asked 300 people about their concerns regarding artificial intelligence:

It seems like most people are worried about humans becoming fully reliant on computers, but almost one third of respondents were specifically concerned about AI destroying humanity. To further illustrate these perspectives on the issue, we collected a few of the most shared articles addressing some key questions:

Will artificial intelligence take our jobs?

Robots will destroy our jobs – and we’re not ready for it (theguardian.com) – Jan 11 2017

  • In a classic example of optimism bias, while approximately two-thirds of Americans believe that robots will inevitably perform most of the work currently done by human beings during the next 50 years, about 80% also believe their current jobs will either “definitely” or “probably” exist in their current form within the same timeframe.
  • As Enbar observed, the most urgent question we must answer is not one of robots’ role in the workforce of 21st-century America, but rather one of inclusion – and whether turning our backs on those who need our help the most is acceptable to us as a nation.

Will Robots Take Our Children’s Jobs? (nytimes.com) – Dec 11 2017

  • The Associated Press already has used a software program from a company called Automated Insights to churn out passable copy covering Wall Street earnings and some college sports, and last year awarded the bots the minor league baseball beat.
  • A much-quoted 2013 study by the University of Oxford Department of Engineering Science — surely the most sober of institutions — estimated that 47 percent of current jobs, including insurance underwriter, sports referee and loan officer, are at risk of falling victim to automation, perhaps within a decade or two.
  • Just this week, the McKinsey Global Institute released a report that found that a third of American workers may have to switch jobs in the next dozen or so years because of A.I.

So, workers, experts say artificial intelligence will take all of our jobs by 2060 (newsweek.com) – May 31 2017

  • There is a 50 percent chance that AI will be able to perform all human tasks better than humans within 45 years, and all human jobs are expected to be automated within the next 120 years, according to a survey of 352 AI researchers who published at either the Conference on Neural Information Processing Systems or the International Conference on Machine Learning in 2015.
  • Transportation innovators like Uber‘s Travis Kalanick and Tesla‘s Elon Musk have predicted that automated vehicles will disrupt the industry over the course of the next 20 years, and Musk estimates “it will be very unusual” for non-autonomous cars to be manufactured in the next decade.
  • Researchers are just now beginning to understand the ways in which automation can interact with the human body, and the impact AI will have on the health industry in the coming decades is impossible to estimate, except the idea that it will be significant.

Why AI could destroy more jobs than it creates, and how to save them (techrepublic.com)

  • “The accumulated doubling of Moore‘s Law, and the ample doubling still to come, gives us a world where supercomputer power becomes available to toys in just a few years, where ever-cheaper sensors enable inexpensive solutions to previously intractable problems, and where science fiction keeps becoming reality,” Brynjolfsson and Andrew McAfee, associate director of the Center for Digital Business at MIT, write in the book.
  • ” There’s no economic law that says ‘You will always create enough jobs or the balance will always be even’, it’s possible for a technology to dramatically favour one group and to hurt another group, and the net of that might be that you have fewer jobs,” said Brynjolfsson.
  • “I already talked to one big law firm and they said they’re not hiring as many of those sorts of people because a machine can scan through hundreds of thousands or millions of documents and find the relevant information for a case or a trial much more quickly and accurately than a human can,” said Brynjolfsson.
  • ” According to our estimate, 47 percent of total US employment is in the high risk category, meaning that associated occupations are potentially automatable over some unspecified number of years, perhaps a decade or two,” they predict in the report The Future of Employment.
  • “Imagine you are a home help aide or a nurse and you see an unusual mole or a lesion and are not quite sure what it is; you could use augmented reality glasses or other tools to send a photograph of that growth to a human expert or even an expert system, a decision-making system that analyses the shape and contours of that lesion and gives advice on whether you need to bring that person in for treatment,” he added.

Will Artificial Intelligence Take Our Jobs? We Asked A Futurist (huffingtonpost.com.au) – Feb 15 2017

  • ” What our simulations show is that one aspect to the debate around artificial intelligence that is frequently lost is the fact that AI and digitisation will impact certain activities in our everyday lives, such as marketing automation or robotic advice, but it may not fully remove the 50 percent of jobs that some pundits talk about.
  • Disruption is a signal from the future that it is high time to adapt, and that smart investments in the right hardware and software, which includes your own thinking software, have to be made.”
  • ” To me it is astounding that in Australia we are so obsessed with bricks and mortar property, but we are less concerned with investments in our own intellectual property, and AI certainly raises the stakes to ensure our thinking remains future-compatible.
  • ” As a global futurist and futurephile, one of the things that excites me about artificial intelligence is the death of procrastination — anything ‘left brained’ that we avoided and delayed doing, like taxes, filing, travel expense coding, receipt management, and updating our calendars will be procrastinated on no longer.

Will artificial intelligence make humans fully reliant on computers?

Nicholas Carr: ‘Are we becoming too reliant on computers?’ (theguardian.com) – Jan 17 2015

  • As digital technology sprints forward, we’re not just learning about the possibilities of computer intelligence, we’re also getting a lesson in its limits.
  • The most subtle of our human skills – our common sense, our ingenuity and adaptability, the fluidity of our thinking – remain well beyond the reach of programmers.
  • It’s possible to imagine self-driving cars operating independently in tightly controlled circumstances, such as on dedicated highway lanes, but as long as cars have to handle the vagaries of real-world traffic in cities and neighbourhoods, a watchful, adept human will continue to have a place in the driver’s seat.
  • The shortcomings of robotic drivers and pilots reveal that the skills we humans take for granted – our ability to make sense of an unpredictable world and navigate our way through its complexities – are ones that computers can replicate only imperfectly.

Are computers making our lives too easy? (bbc.com)

  • And it does seem to me that this is a naive approach to take when thinking about technology in all its forms: in particular when thinking about computer automation, but also when thinking about our own desires and experience of life and of the world.
  • NC: I think that gets to a fundamental point, which is that the question isn’t, “should we automate these sophisticated tasks?”, it’s “how should we use automation, how should we use the computer to complement human expertise, to offset the weaknesses and flaws in human thinking and behaviour, and also to ensure that we get the most out of our expertise by pushing ourselves to ever higher levels?”
  • What happens then is that you not only lose the distinctive strengths of human intelligence – the ability of human beings to actually question what they are doing in a way that computers can’t – but you push forward with these systems in a thoughtless way, assuming that speed of decision-making is the most important thing.
  • And I said, you know, I’m not saying that there is no role for labour-saving technology; I’m saying that we can do this wisely, or we can do it rashly; we can do it in a way that understands the value of human experience and human fulfilment, or in a way that simply understands value as the capability of computers.
  • And in the end I do think that our latest technologies, if we demand more of them, can do what technologies and tools have done through human history, which is to make the world a more interesting place for us, and to make us better people.


Will artificial intelligence destroy humanity?

Stephen Hawking: Artificial Intelligence Could End Human Race (livescience.com) – November 2017

  • “The development of full artificial intelligence (AI) could spell the end of the human race,” Hawking told the BBC.
  • The CEO of the spaceflight company SpaceX and the electric car company Tesla Motors told an audience at MIT that humanity needs to be “very careful” with AI, and he called for national and international oversight of the field.
  • “I don’t see any reason to think that as machines become more intelligent … which is not going to happen tomorrow — they would want to destroy us or do harm,” Ortiz told Live Science.

Artificial Intelligence Is Our Future. But Will It Save Or Destroy Humanity? (futurism.com) – Sep 29 2017

  • “Normally, the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and after many years, a regulatory agency is set up to regulate that industry,” Musk said during the same NGA talk.
  • Russian President Vladimir Putin recently stoked this fear at a meeting with Russian students in early September, when he said, “The one who becomes the leader in this sphere will be the ruler of the world.” These comments further emboldened Musk’s position  —  he tweeted  that the race for AI superiority is the “most likely cause of WW3.”
  • Facebook’s Mark Zuckerberg went even further during a Facebook Live broadcast back in July, saying   that Musk’s comments were “pretty irresponsible.” Zuckerberg is optimistic about what AI will enable us to accomplish and thinks that these unsubstantiated doomsday scenarios are nothing more than fear-mongering.
  • “We must first understand the concepts of how the brain works and then we can apply that knowledge to AI development.” Better understanding of our own brains would not only lead to AI sophisticated enough to rival human intelligence, but also to better brain-computer interfaces to enable a dialogue between the two.

South Korean university is secretly developing killer AI robot army that could destroy humanity, scientists fear (thesun.co.uk) – Apr 05 2018

  • A TOP South Korean university is secretly developing a killer Artificial Intelligence robot army that could destroy humanity, scientists fear.
  • KAIST university allegedly launched a new AI weapons lab in February, leading dozens of researchers to believe the products will “have the potential to be weapons of terror”.
  • A top university in South Korea is secretly developing a killer AI robot army, scientists have warned.

AI Could Kill Us All: Meet the Man Taking the Threat Seriously (thenextweb.com) – Mar 08 2014

  • “So yes, the thing is that what we actually need to do is to try and program in essentially what is a good life for humans or what things it’s not allowed to interfere with and what things it is allowed to interfere with… and do this in a form that can be coded or put into an AI using one or another method.”
  • “Proper AI of the (kind where) ‘we program it in a computer using some method or other’… the uncertainties are really high and we may not have them for centuries, but there’s another approach that people are pursuing which is whole-brain emulations, some people call them ‘uploads’, which is the idea of copying human brains and instantiating them in a computer.
  • “We would just have to worry about the fact of an extremely powerful human – a completely different challenge but it’s the kind of challenge that we’re more used to – constraining powerful humans – we have a variety of methods for that that may or may not work, but it is a completely different challenge than dealing with the completely alien mind of a true AI.” David from ‘AI: Artificial Intelligence
  • “(If) someone comes up with a nearly neat algorithm, feeds it a lot of data, this turns out to be able to generalize well and – poof – you have it very rapidly, though it is likely that we won’t have it any time soon, we can’t be entirely confident of that either.”
  • We would want to have done at least enough philosophy that we could get the good parts into the AI so that when it started extending it didn’t extend it in dangerous or counterproductive areas, but then again it would be ‘our final invention’ so we would want to get it right.

61% of Americans don’t believe AI can fix the Fake News problem, yet.

At Frase we are constantly thinking about the future of content creation, and how AI will play a role in making us a more informed society. This post discusses what the general public thinks about AI as a possible tool to combat fake news. It also explores possible alternatives.

In partnership with Lucid, the leading Human Answers Platform, we surveyed a demographically balanced audience in the US with the question: “Can Artificial Intelligence (AI) fix the Fake News problem?”

Now, let’s give more background to the Fake News controversy. Around the 2016 US presidential election, political scientists from Princeton University reported that 1 in 4 Americans visited a fake news site. Considering that 62 percent of Americans get their news from social media, platforms like Facebook or Twitter should play a prominent role in combating the Fake News epidemic.

When Mark Zuckerberg told Congress that Facebook would use artificial intelligence to detect fake news posted on the social media site, he wasn’t particularly specific about what that meant or how feasible that would be. We know Facebook is currently working on a number of initiatives to mitigate the fake news problem, including an increased focus on content from friends and family as it relates to Facebook’s News Feed.

Most people don’t believe AI is the solution to Fake News

Based on our survey we can conclude that the majority of Americans don’t believe AI is ready to tackle the problem yet. That said, it is worth looking at some innovations occurring in the space. In fact, I could argue that the proliferation of fake articles on the Internet plays in favor of AI’s chances of learning how to detect fake stories.

After doing some research, I discovered that fake news can generally fall into these different categories:

  • entirely false or fake information;
  • discussion of real events with inaccurate interpretations;
  • pseudoscientific articles that pretend to have a research foundation;
  • satirical content;
  • articles that include a mix of unreliable opinions from online forums or social media platforms, most notably Twitter.

My initial reaction was that an AI system could be trained to classify articles into these categories. But of course, the system would not be perfect and would have a high degree of bias based on who labels the training dataset. In addition, a given website might have a combination of real and fake information, which makes the data collection more time consuming.

A company called Fakebox found an interesting solution. Their answer isn’t detecting fake news, but detecting real news! Real news is factual and has little to no interpretation. There are also plenty of reputable sources to build a dataset from.

In the words of Fakebox’s co-founder:

We trained a machine learning model that analyzes the way an article is written, and tells you if its similar to an article written with little to no biased words, strong adjectives, opinion, or colorful language. It can have a hard time if an article is too short, or if it’s primarily comprised of other people’s quotes (or Tweets). It is not the end all solution to fake news. But hopefully it will help spot articles that need to be taken with a grain of salt.
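The style signal Fakebox describes can be caricatured in a few lines: measure the share of loaded language in a text. The word list and the 5% threshold below are crude assumptions for illustration, nothing like a trained model:

```python
# Toy "real news" heuristic: straight news tends to avoid loaded language.
# The word list and threshold are illustrative assumptions only.
LOADED_WORDS = {"shocking", "unbelievable", "disaster", "amazing",
                "outrageous", "secret", "destroy"}

def looks_like_straight_news(text, max_ratio=0.05):
    """Return True if the share of loaded words is at or below the threshold."""
    words = text.lower().split()
    loaded = sum(1 for w in words if w.strip(".,!?") in LOADED_WORDS)
    return loaded / len(words) <= max_ratio

print(looks_like_straight_news(
    "The city council approved the budget on Tuesday after a public hearing."))
print(looks_like_straight_news(
    "SHOCKING secret plan will DESTROY everything, unbelievable disaster!"))
```

A trained model replaces the hand-written list with patterns learned from thousands of reputable articles, but the underlying signal, neutral versus colorful writing, is the same.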

Another interesting tool is FakeRank, which works like Google’s PageRank but for fake news detection: instead of links between web pages, the network consists of facts and supporting evidence. It leverages knowledge from the web with deep learning and natural language processing techniques to understand the meaning of a news story and verify that it is supported by facts.

Unfortunately, AI can be also used for creating even more fake content

Here are a few quotes that illustrate the dark side of AI in content creation:

  • “Video Machine-learning experts have built a neural network that can manipulate videos to create fake footage – in which people appear to say something they never actually said.” (theregister.co.uk)
  • “So incredible, that of the 1,600 reviews posted on the book’s Amazon page in just a few hours, the company soon deleted 900 it suspected of being bogus: written by people who said they loved or hated the book, but had neither purchased nor likely even read it.” (scientificamerican.com)
  • “In a paper published this month, the researchers explained their methodology: Using a neural network trained on 17 hours of footage of the former US president’s weekly addresses, they were able to generate mouth shapes from arbitrary audio clips of Obama’s voice.” (qz.com)
  • “Artificial intelligence can use still images of people and turn them into fake video – and even put words in their mouths. The system takes an image of a person as well as an audio clip to create a video. It identifies facial features in photos using algorithms that recognize the face. As audio plays, it manipulates the mouth so it looks like the person is speaking. With improvement, the researchers say the AI could make fake videos seem real. The system could eventually render video evidence unreliable in court cases.” (dailymail.co.uk)

So…could blockchain come to the rescue?

Besides AI, there is another technology that could play a major role in solving the fake news problem: blockchain. I found two projects that propose very similar approaches, as per their websites:

  • Eventum makes it easy for people to get paid for reporting on real-time events and information around them, while developers can get any data they want in a cheap, fast and secure data feed. It uses a ‘wisdom of the crowd’ principle and ‘blockchain-as-a-court-system’ on the Ethereum network to solve problems, including fake news, eSports data extraction and real-time feedback to AI algorithms.
  • PUBLIQ, a nonprofit foundation, launched a decentralized content platform that offers merit-based, real-time, instant remuneration to authors from all over the world, in order to combat fake news and biased reporting. The PUBLIQ Foundation is building a trust-based ecosystem operated by authors, journalists, bloggers and advertisers around the world, encouraging them to share their written perspectives in a safe and encrypted way.
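Both projects (like Po.et, mentioned earlier) rest on the same primitive: anchoring a cryptographic fingerprint of a work so its origin and integrity can be verified later. Below is a minimal sketch of the fingerprinting step only; the on-chain anchoring is omitted, and all field and function names are assumptions for illustration:

```python
import hashlib
import time

def make_claim(title: str, author: str, body: str) -> dict:
    """Build a provenance claim: the article itself stays off-chain; only
    this small record (with the content hash) would be anchored on-chain."""
    content_hash = hashlib.sha256(body.encode("utf-8")).hexdigest()
    return {
        "title": title,                # hypothetical field names
        "author": author,
        "contentHash": content_hash,
        "timestamp": int(time.time()),
    }

def verify(claim: dict, body: str) -> bool:
    """Anyone holding the article text can re-hash it and compare."""
    return hashlib.sha256(body.encode("utf-8")).hexdigest() == claim["contentHash"]

article = "Full text of the news story..."
claim = make_claim("Story headline", "A. Reporter", article)
print(verify(claim, article))            # True
print(verify(claim, article + " edit"))  # False: any tampering is detectable
```

The design point: the blockchain never stores the article, only a hash and metadata, so provenance can be checked without trusting the distributor.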

Takeaways

The survey results are not surprising considering that even Facebook can’t articulate a clear plan as to how AI will solve the fake news problem. It seems that instead of attempting to find out whether each story is factual, we might be better off examining its source and distributors. This is where blockchain might be able to help. In addition, AI could prove valuable in recognizing whether the content of a story closely resembles that of other stories that have proven to be fake.

How is artificial intelligence transforming content creation?

The rate of digital content creation has exploded, and artificial intelligence promises automation, personalization, and scale. In 2017, Walter Frick at Harvard Business Review explained why AI can’t write articles yet, but also concluded that it can help us write our own better.

The following list features existing use cases where AI is leveraged to automate, assist or augment any step of the content creation process:

Newsroom automation

To deal with the growing volume of information and gain competitive advantage, the news industry has started to explore and invest in news automation.

  • Washington Post – “Heliograf is creating a new model for hyperlocal coverage,” said Jeremy Gilbert, The Post’s director of strategic initiatives. “In the past, it would not have been possible for The Post to staff more than a handful of the most significant games each week. Now, we’ll be able to cover any game that we have data for, giving the teams and fans near-instant coverage to read and share.”
  • Associated Press – The Associated Press uses Wordsmith to transform raw earnings data into thousands of publishable stories, covering hundreds more quarterly earnings stories than previous manual efforts.
  • Reuters – Reuters Tracer automates end-to-end news production using Twitter data. It is capable of detecting, classifying, annotating, and disseminating news in real time for Reuters journalists without manual intervention.
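Wordsmith's templates are proprietary, but the underlying data-to-text pattern behind systems like it and Heliograf is straightforward: fill a narrative template from structured fields. A simplified sketch, with field names, numbers and phrasing chosen purely for illustration:

```python
def earnings_story(data: dict) -> str:
    """Turn one row of earnings data into a short publishable paragraph
    (simplified; not Wordsmith's actual templates or logic)."""
    change = data["revenue"] - data["revenue_prev"]
    pct = 100.0 * change / data["revenue_prev"]
    direction = "rose" if change >= 0 else "fell"
    beat = "beating" if data["eps"] > data["eps_expected"] else "missing"
    return (
        f"{data['company']} reported quarterly revenue of "
        f"${data['revenue'] / 1e9:.1f} billion, which {direction} "
        f"{abs(pct):.1f}% year over year. Earnings per share came in at "
        f"${data['eps']:.2f}, {beat} analyst expectations of "
        f"${data['eps_expected']:.2f}."
    )

# One hypothetical earnings row; a wire service would loop over thousands.
row = {"company": "Acme Corp", "revenue": 2.30e9, "revenue_prev": 2.00e9,
       "eps": 1.42, "eps_expected": 1.35}
print(earnings_story(row))
```

Production systems layer many templates, synonym variation and editorial rules on top, but this is why earnings recaps and game summaries were automated first: the inputs are clean, structured numbers.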

Advertising copy generation

As ad-tech companies develop a better understanding of audience preferences, they will aim to personalize advertising messaging as much as possible.

  • Saatchi trained IBM Watson to write thousands of ads for Toyota: Saatchi wrote 50 scripts based on location, behavioral insights and occupation data that explained the car’s features, setting up a structure for the campaign. The scripts were then used to train Watson so it could whip up thousands of pieces of copy that sounded like they were written by humans.
  • Persado enables brands to engage each consumer across all channels with personalized emotional language. “Imagine having a data scientist and a copywriter for each person in your audience; you get the language that performs and the analytics explaining why, resulting in more business and unseen insights”.

Data-driven storytelling

Think of it as telling a story straight out of an Excel spreadsheet.

Understanding and Producing Video

Video is among the fastest growing forms of content creation. Extracting meaning from videos and automatically creating new ones will be a growing opportunity.

  • Reely – uses computer vision and deep learning to transform video into actionable data.
  • Wibbitz – automatically creates premium videos from text.

Robo-writing & Paraphrasing

Input a topic and an AI will craft a human-readable article for you. This is what a few startups are building to help writers scale their content production. Examples include Articoolo, AI Writer and WordAI.

SEO and keyword research

Search engines now analyze your website through the lens of AI algorithms, which means they understand the overall context and theme, not just keywords. New AI tools for keyword research help you expand the scope of your keyword research process.

  • MarketMuse – an AI-powered research assistant that accelerates content creation and optimization so you can win in organic search.
  • Twinword – an AI-powered keyword research tool that uses smart filters to quickly narrow down your keywords.


Summarization

When you are doing research, it is easy to waste time browsing sources that are not relevant to you. Summarization helps you quickly skim articles and decide whether they are worth reading.

  • Agolo – Agolo uses artificial intelligence to create summaries from your information in real-time.
  • Salesforce – has published ground-breaking research on summarizing long-form text.
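Agolo's system and Salesforce's research are far more sophisticated, but the classic baseline they build on, frequency-based extractive summarization, is easy to sketch: score each sentence by how often its terms appear in the document and keep the top-scoring sentences in their original order. The stopword list and example text are illustrative:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "it", "that",
             "for", "on", "was", "with", "as", "are", "at", "this"}

def summarize(text: str, max_sentences: int = 2) -> str:
    """Frequency-based extractive summary: score each sentence by the
    average document frequency of its non-stopword terms."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)
    def score(sentence):
        terms = [w for w in re.findall(r"[a-z']+", sentence.lower())
                 if w not in STOPWORDS]
        return sum(freq[w] for w in terms) / (len(terms) or 1)
    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in top)  # keep original order

text = ("The rover landed on Mars after a seven-month journey. "
        "Engineers celebrated at mission control. "
        "The rover will search Mars for signs of ancient water. "
        "Dinner that evening was pizza.")
print(summarize(text))
```

Extractive methods like this only select existing sentences; the abstractive models in Salesforce's research generate new wording, which is a much harder problem.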

Image editing

What if AI could do all the image editing for us?

  • Object AI’s technology automatically detects the subject of the photo and makes photo editing intelligent.
  • Adobe’s prototype AI tools let you instantly edit photos and videos.

Personal communication

One big area of content creation includes email. Can AI help us create better content personalized to our audience?

  • Crystal Knows – understands the personalities of your co-workers by analyzing their email language, then helps you optimize your emails accordingly.

Voice recognition for note-taking

A lot of knowledge may get forgotten in conversations and meetings. Can AI help us turn voice into comprehensive text notes?

  • Tetra – Tetra uses AI to take notes on phone calls, to help you focus, remember the details, and keep your team in sync.

Proofreading and grammar

AI models trained to make us better writers.

  • Atomic Reach – its editor provides real-time, custom, predictive recommendations to help you create perfectly written content with your audience in mind, every time.
  • Grammarly – Grammarly makes sure everything you type is clear, effective, and mistake-free.

Question Answering

Bots like Siri or Google Assistant are capable of answering fact-based questions, like “What is the capital of Spain?”. However, they often struggle to answer open-ended questions, like “Why is unemployment growing in Spain?”. In such cases, an AI would have to combine multiple small snippets of evidence to make a judgement.

  • IBM – proposes navigating knowledge sources like Wikipedia to search for an answer, prioritizing some documents over others and combining knowledge from different parts of the documents it reads to reason out an answer.

Question Generation

Can computers ask intelligent questions?

  • Maluuba (Microsoft) – while there have been many advances in machine reading comprehension to develop models that can answer questions, Maluuba has been working to teach machines to ask informative questions.