Description
Getting started with natural language processing and neural networks
is easier nowadays thanks to the numerous talks and tutorials. The
goal of this talk is to dive deeper, for those who already know the
basics or want to expand their knowledge of machine learning.
The talk will start with common use cases that can be generalized
to specific problems in the NLP world. Then I will present an
overview of possible features that we can use as input to our network,
and show that even simple feature engineering can change our results.
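
To make the feature side concrete, here is a minimal sketch of how two simple feature choices turn the same texts into different inputs; the library (scikit-learn), the toy sentences, and the unigram/bigram TF-IDF settings are my own illustrative assumptions, not material from the talk.

```python
# Illustrative sketch only: the same texts as two different feature matrices.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

texts = [
    "the service was great",
    "the service was not great",
]

# Plain bag-of-words counts over single tokens.
bow = CountVectorizer()
X_bow = bow.fit_transform(texts)

# TF-IDF weights over unigrams and bigrams: "not great" becomes its own
# feature, so the two sentences are much easier to tell apart.
tfidf = TfidfVectorizer(ngram_range=(1, 2))
X_tfidf = tfidf.fit_transform(texts)

print(X_bow.shape, X_tfidf.shape)  # (2, 5) vs. (2, 10) for these toy sentences
print(sorted(tfidf.get_feature_names_out()))
```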
Furthermore, I will compare different network architectures, starting
with fully connected networks, moving through convolutional neural
networks, and ending with recursive neural networks. I will consider
not only the good parts of every solution, but also its pitfalls,
which are usually overlooked.
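
As a rough illustration of the kind of comparison I have in mind, the sketch below builds one small text classifier per architecture family on the same token-id input. The framework (Keras), the layer sizes, and the use of an LSTM as the sequential example are assumptions made for illustration, not code from the talk.

```python
# Illustrative sketch only: three small text classifiers, one per architecture
# family, all consuming the same padded token-id sequences. Sizes are arbitrary.
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 20_000, 64, 2

def fully_connected():
    # Averaged embeddings followed by dense layers: few parameters, very fast.
    return models.Sequential([
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def convolutional():
    # 1-D convolutions pick up local, n-gram-like patterns.
    return models.Sequential([
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def sequential_lstm():
    # An LSTM reads the tokens in order: order-aware, but slower to train.
    return models.Sequential([
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),
        layers.LSTM(64),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
```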
All of this will be done with the number of parameters in mind, since
it translates directly into training and prediction cost and time. I
will also share a number of “tricks” that enable getting the best
results even out of simple architectures, as these are usually the
fastest, quite often hard to beat, and at the same time the easiest to
interpret.
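
One way to get a feel for those costs, assuming the illustrative Keras models sketched above, is simply to build each model for a fixed sequence length and compare the parameter counts.

```python
# Sketch of a parameter-count comparison for the models defined above.
# SEQ_LEN is an assumed, fixed sequence length used only to build the models.
SEQ_LEN = 100

for build in (fully_connected, convolutional, sequential_lstm):
    model = build()
    model.build(input_shape=(None, SEQ_LEN))  # batches of token ids of length SEQ_LEN
    print(f"{build.__name__:>16}: {model.count_params():,} parameters")
```

With these toy settings the shared embedding layer dominates the totals, and the differences between the three model heads are comparatively small.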