Artificial Intelligence


Understanding Data Virtualization for Learning Models

Author: Tal Ben Yakar

Data is the most crucial building block of any data mining or AI application. Deep learning approaches in particular require massive datasets: the underlying theory and algorithms have existed for quite a while, but it is the ability to process the right amounts of data that enabled the recent breakthroughs in the field. A challenge arises when the available dataset is small compared to the training data required. Obtaining more data is usually neither easy nor cheap; many annotation services take advantage of this, charging for dataset-tagging campaigns that can easily cost hundreds of dollars, often with uncertain quality. With the task of generalization at hand, we asked how to exploit the minimal data we have and still allow an AI system to learn well. In this paper, we survey methods for addressing this problem and suggest solutions to overcome the challenge.
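One standard family of methods for stretching a small dataset, of the kind the abstract alludes to, is label-preserving data augmentation. The sketch below is illustrative only, not taken from the paper: the function name and the particular transforms (mirroring and right-angle rotations, suitable for many image tasks) are assumptions for the example.

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Generate label-preserving variants of one training image.

    A minimal illustration of data augmentation; the choice of
    transforms is an assumption, not the paper's method.
    """
    variants = [image]
    variants.append(np.fliplr(image))   # horizontal mirror
    for k in (1, 2, 3):                 # 90, 180, 270 degree rotations
        variants.append(np.rot90(image, k))
    return variants

# A single 2x2 "image" yields 5 variants, multiplying the
# effective dataset size without any new annotation cost.
sample = np.arange(4).reshape(2, 2)
augmented = augment(sample)
```

In practice the set of safe transforms depends on the task: a horizontal flip preserves the label of a cat photo but not of a handwritten digit, so augmentation policies must be chosen per domain.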

Comments: 9 Pages.


Submission history

[v1] 2017-06-19 08:06:35


