Companies ranging from high-tech startups to global multinationals see artificial intelligence as a key competitive advantage in an increasingly crowded and technical market.

But the AI industry moves so quickly that it’s often hard to keep up with the latest research breakthroughs, and even harder to translate scientific results into business outcomes.

To help you develop a robust AI strategy for your business in 2020, I’ve summarized the latest trends across different research areas, including natural language processing, conversational AI, computer vision, and reinforcement learning. I’ve also included external educational resources you can use to further your expertise.

Natural Language Processing

In 2018, pre-trained language models pushed the limits of natural language understanding and generation. They continued to dominate NLP progress last year.

If you’re new to NLP developments, pre-trained language models have made practical applications of NLP significantly cheaper, faster, and easier: they allow you to pre-train a model once on a large dataset and then quickly fine-tune it for a variety of downstream NLP tasks.
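To make this concrete, here is a minimal sketch of the pre-train + fine-tune workflow. It assumes the Hugging Face transformers library, a BERT checkpoint, and a toy two-example sentiment dataset; all of these are illustrative choices rather than anything prescribed by the research discussed here.

```python
# A minimal sketch of "pre-training + fine-tuning", assuming the Hugging Face
# transformers library and a toy sentiment-classification task.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a model that was already pre-trained on a large corpus,
# and attach a fresh classification head for the downstream task.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Tiny illustrative dataset; a real project would use a labeled corpus.
texts = ["the movie was great", "the movie was terrible"]
labels = torch.tensor([1, 0])
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few quick fine-tuning steps
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```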

Teams from top research institutions and tech companies explored ways to make state-of-the-art language models even more sophisticated. Many improvements were driven by massive boosts in computing capacities, but many research groups also found ingenious ways to lighten models while maintaining high performance.

Thus, current research trends are as follows:

  • The new NLP paradigm is “pre-training + fine-tuning”. Transfer learning has dominated NLP research over the last two years. ULMFiT, CoVe, ELMo, OpenAI GPT, BERT, OpenAI GPT-2, XLNet, RoBERTa, ALBERT – this is a non-exhaustive list of important pre-trained language models introduced recently. Even though transfer learning has definitely pushed NLP to the next level, it is often criticized for requiring enormous computational resources and large annotated datasets.
  • Linguistics and knowledge are likely to advance the performance of NLP models. Experts believe that linguistics can boost deep learning by improving the interpretability of data-driven approaches. Leveraging context and human knowledge can further improve the performance of NLP systems.
  • Neural machine translation is demonstrating visible progress. Simultaneous machine translation is already performing at a level where it can be applied in the real world. Recent research breakthroughs seek to further improve translation quality by optimizing neural network architectures, leveraging visual context, and introducing novel approaches to unsupervised and semi-supervised machine translation (a short usage sketch follows this list).
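As a quick illustration of how accessible pre-trained translation models have become, the sketch below runs an off-the-shelf English-to-German model. It assumes the Hugging Face transformers library and a publicly available MarianMT checkpoint; both are illustrative choices, not one of the research systems mentioned above.

```python
# A minimal sketch of applying a pre-trained neural machine translation model,
# assuming the Hugging Face transformers library and a MarianMT checkpoint.
from transformers import pipeline

# Load an off-the-shelf English-to-German translation model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

result = translator("Machine translation has improved dramatically in recent years.")
print(result[0]["translation_text"])
```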

Conversational AI

Conversational AI is becoming an integral part of business practice across industries. More companies are adopting the advantages chatbots bring to customer service, sales, and marketing.

Even though chatbots are becoming a “must-have” asset for leading businesses, their performance is still far from human-level. Researchers from major institutions and tech leaders have explored ways to boost the performance of dialog systems:

  • Dialog systems are improving at tracking long-term aspects of a conversation. The goal of many research papers presented over the last year was to improve a system’s ability to understand the complex relationships introduced during a conversation by better leveraging conversation history and context (a minimal illustration follows this list).
  • Many research teams are addressing the diversity of machine-generated responses. Currently, real-world chatbots mostly generate boring and repetitive responses. Last year, several strong research papers were published that aim to generate diverse yet relevant responses.
  • Emotion recognition is seen as an important feature for open-domain chatbots. Therefore, researchers are investigating the best ways to incorporate empathy into dialog systems. The achievements in this research area are still modest, but considerable progress in emotion recognition could significantly boost the performance and popularity of social bots and increase the use of chatbots in psychotherapy.
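To illustrate the first two points, here is a minimal sketch of a chatbot turn that conditions on the full conversation history and uses sampling to encourage more varied replies. It assumes the Hugging Face transformers library and the publicly released DialoGPT model; the decoding settings are illustrative, not the ones used in the papers above.

```python
# A minimal sketch of a history-aware chatbot turn, assuming the Hugging Face
# transformers library and the DialoGPT conversational model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

history_ids = None
for user_utterance in ["Hello! How was your weekend?", "What did you end up doing?"]:
    # Append the new user turn to the running conversation history so the model
    # conditions its reply on everything said so far.
    new_ids = tokenizer.encode(user_utterance + tokenizer.eos_token, return_tensors="pt")
    input_ids = new_ids if history_ids is None else torch.cat([history_ids, new_ids], dim=-1)

    history_ids = model.generate(
        input_ids,
        max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,  # sampling gives more diverse, less repetitive replies
        top_p=0.9,
    )
    reply = tokenizer.decode(history_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```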

Computer Vision

During the last few years, computer vision (CV) systems have revolutionized whole industries and business functions with successful applications in healthcare, security, transportation, retail, banking, agriculture, and more.

Recently introduced architectures and approaches like EfficientNet and SinGAN further improve the perceptive and generative capacities of visual systems.
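For a sense of how easy these perceptive capacities are to tap into, here is a minimal sketch that classifies an image with a pre-trained EfficientNet. It assumes the third-party efficientnet_pytorch package and uses a random tensor in place of a real, properly normalized image.

```python
# A minimal sketch of image classification with a pre-trained EfficientNet,
# assuming the third-party efficientnet_pytorch package.
import torch
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained("efficientnet-b0")  # ImageNet weights
model.eval()

# Dummy 224x224 RGB batch; in practice, load and normalize a real image.
image = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    logits = model(image)             # scores for the 1000 ImageNet classes
    top5 = logits.topk(5).indices[0]  # five most likely class indices
print(top5.tolist())
```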

The trending research topics in computer vision are the following:

  • 3D is currently one of the leading research areas in CV. This year, we saw several interesting research papers that aim to reconstruct our 3D world from its 2D projections. The Google Research team introduced a novel approach to generating depth maps of entire natural scenes. The Facebook AI team suggested an interesting solution for 3D object detection in point clouds.
  • The popularity of unsupervised learning methods is growing. For example, a research team from Stanford University introduced a promising Local Aggregation approach to object detection and recognition with unsupervised learning. In another great paper, nominated for the ICCV 2019 Best Paper Award, unsupervised learning was used to compute correspondences across 3D shapes.
  • Computer vision research is being successfully combined with NLP. The latest research advances enable robust change captioning between two images in natural language, vision-language navigation in 3D environments, and learning hierarchical vision-language representation for better image caption retrieval and visual grounding.

Reinforcement Learning

Reinforcement learning (RL) continues to be less valuable for business applications than supervised learning and even unsupervised learning. It is successfully applied only in areas where huge amounts of simulated data can be generated, like robotics and games.
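The sketch below shows why simulators matter: an agent can collect thousands of transitions at essentially no cost, which is rarely possible with real-world data. It assumes the OpenAI Gym package (classic pre-0.26 API) and uses a placeholder random policy rather than any particular RL algorithm.

```python
# A minimal sketch of collecting experience from a simulated environment,
# assuming the OpenAI Gym package (classic API) and a random placeholder policy.
import gym

env = gym.make("CartPole-v1")
episodes, total_reward = 20, 0.0

for _ in range(episodes):
    obs = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()          # random policy stands in for a learner
        obs, reward, done, info = env.step(action)  # the simulator returns feedback instantly
        total_reward += reward

print("average episode return:", total_reward / episodes)
```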

However, many experts recognize RL as a promising path towards Artificial General Intelligence (AGI), or true intelligence. Thus, research teams from top institutions and tech leaders are seeking ways to make RL algorithms more sample-efficient and stable. The trending research topics in reinforcement learning include:

  • Multi-agent reinforcement learning (MARL) is rapidly advancing. The OpenAI team recently demonstrated how agents in a simulated hide-and-seek environment built strategies that the researchers did not know the environment supported. Another notable paper received an Honorable Mention at ICML 2019 for investigating how multiple agents influence each other when provided with a suitable intrinsic motivation.
  • Off-policy evaluation and off-policy learning are recognized as very important for future RL applications. The recent breakthroughs in this research area include new solutions for batch policy learning under multiple constraints, combining parametric and non-parametric models, and introducing a novel class of off-policy algorithms to force an agent towards acting close to on-policy.
  • Exploration is an area where serious progress can be achieved. The papers presented at ICML 2019 introduced new efficient exploration methods with distributional RL, maximum entropy exploration, and a security condition to deal with the bridge effect in reinforcement learning.

This has been a quick, high-level overview of new AI and machine learning research trends across the most popular subtopics of NLP, conversational AI, computer vision, and reinforcement learning; many of these trends have implications for business.

Many more breakthroughs in applied AI are expected in 2020 that will build on notable technical advancements in machine learning achieved in 2019.