Get a Complete Hands-On Learning Experience with Course Support and Mentorship
Deep learning algorithms perform much better, delivering remarkable accuracy, when there are tons of data available for tasks like image classification and restoration, object detection, and even handwritten digit recognition.
Some of its real-world applications are:
Face recognition, etc.
Deep learning uses neural networks to train machines and automate tasks performed by the human visual system.
Deep learning is regarded as a key technology and sub-branch of AI and ML because it mimics the workings of the human brain, framing seemingly impossible learning problems as empirical loss minimization via gradient descent, which is a conceptually very simple idea.
I transitioned my career from Manual Tester to Data Scientist by upskilling myself on my own from various online resources and doing lots of hands-on practice. For the internal switch I sent around 150 emails to different project managers, interviewed with 20, and got selected in 10 projects.
When it came to changing companies, I put in my papers with NO offers in hand, and during the notice period I struggled to get a job. The first 2 months were very difficult, but in the last month things started changing miraculously.
I attended 40+ interviews in a span of 3 months with the help of Naukri and LinkedIn profile optimization, and got offers from 8 companies.
Now I want to use my experience to help people upskill themselves and land their dream Data Science jobs!
In this assignment, we have covered the real-world use cases of TensorFlow, the definition of tensors, tensor types and formats, TensorFlow methods, mathematical operations in TensorFlow, NumPy compatibility, gradients, and finally a small end-to-end linear classifier using the concepts of TensorFlow learnt from the beginning.
In this assignment we have covered all the fundamentals of PyTorch that one should know while working with this library.
In this assignment we have covered each and every concept that one should know before getting into neural networks.
In this assignment, we have covered the definition of an activation function, its uses in deep learning, and the types of activation functions (linear, binary, and non-linear) and their subtypes. We have also covered how to use them in code.
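To make these types concrete, here is a minimal plain-Python sketch of three common non-linear activations (the assignment itself uses a deep learning library, so treat this only as an illustration of the math):

```python
import math

# Common activation functions, written out in plain Python for illustration.

def sigmoid(x):
    # Squashes any real number into (0, 1); often used for binary outputs.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Rectified Linear Unit: passes positives through, zeroes out negatives.
    return max(0.0, x)

def tanh(x):
    # Squashes into (-1, 1); zero-centered, unlike sigmoid.
    return math.tanh(x)

print(sigmoid(0.0))           # 0.5
print(relu(-3.0), relu(2.0))  # 0.0 2.0
```

Each of these introduces the non-linearity that lets a network learn more than a straight line.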
In this assignment we have trained multiple models on the MNIST handwritten digit dataset and learnt how to choose the number of layers and neurons for a model. We have also explained how changing the architecture of the neural network affects the model.
In this assignment, we will see what a loss function is and how to choose one for a particular problem. We have covered multiple cases, such as regression, binary, and multi-class classification, where you will choose the loss function accordingly.
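As a taste of the idea, here is a minimal plain-Python sketch of two such losses: mean squared error for regression and binary cross-entropy for binary classification (illustrative only; in practice you would use the library's built-in losses):

```python
import math

def mse(y_true, y_pred):
    # Mean squared error: the usual choice for regression.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # For binary classification; y_pred are probabilities in (0, 1).
    # eps guards against log(0).
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([1.0, 2.0], [1.5, 2.0]))              # 0.125
print(binary_cross_entropy([1, 0], [0.9, 0.1])) # small: predictions are good
```

Notice that confident, correct probabilities give a small cross-entropy; confident wrong ones are punished heavily.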
In this assignment we will learn how data flows from left to right in a neural network via forward propagation, and how back propagation then updates the weights, which eventually helps us reduce the loss.
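The idea can be sketched for a single neuron with squared-error loss; all values below (weight, input, learning rate) are arbitrary toy numbers:

```python
# One forward + backward pass for a single neuron y = w*x + b,
# trained on one sample with squared-error loss.

w, b = 0.5, 0.0       # parameters (arbitrary starting values)
x, target = 2.0, 3.0  # one training example
lr = 0.1              # learning rate

# Forward propagation: compute the prediction and the loss.
y = w * x + b
loss = (y - target) ** 2

# Back propagation: the chain rule gives the gradients of the loss.
dloss_dy = 2 * (y - target)
dw = dloss_dy * x     # dL/dw
db = dloss_dy * 1.0   # dL/db

# Gradient descent update: step against the gradient.
w -= lr * dw
b -= lr * db

new_loss = (w * x + b - target) ** 2
print(loss, new_loss)  # the loss decreases after the update
```

One pass is exactly this: predict, measure the error, push the error backwards, nudge the weights.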
In this assignment, we have covered what an optimizer is and its different types, such as gradient descent, SGD, momentum, Adagrad, Adadelta, Adam, mini-batch, RMSprop, NAG, and FTRL, with a working example.
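To show the flavour of the difference, here is a toy plain-Python comparison of plain gradient descent and momentum on the simple function f(w) = w², with arbitrarily chosen hyperparameters:

```python
# Minimizing f(w) = w**2 with plain gradient descent vs. momentum.
# Toy function and hyperparameters, chosen arbitrarily for illustration.

def grad(w):
    return 2 * w  # derivative of f(w) = w**2

lr = 0.1  # learning rate

# Plain gradient descent: w <- w - lr * grad(w)
w_gd = 5.0
for _ in range(300):
    w_gd -= lr * grad(w_gd)

# Momentum: a velocity term accumulates past gradients,
# smoothing the updates instead of reacting to each gradient alone.
w_m, v, beta = 5.0, 0.0, 0.9
for _ in range(300):
    v = beta * v + grad(w_m)
    w_m -= lr * v

print(w_gd, w_m)  # both end up very close to the minimum at w = 0
```

Optimizers like Adam and RMSprop build on the same skeleton, adding per-parameter adaptive step sizes.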
In this assignment, you will learn different regularization techniques like dropout, L1, L2, and early stopping using the Fashion-MNIST dataset. You will build multiple models to gain deeper insights.
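Early stopping in particular is easy to sketch: watch the validation loss and stop once it has not improved for a set number of epochs. The loss values below are made up for illustration:

```python
# Early stopping sketch: stop training when the validation loss
# hasn't improved for `patience` consecutive epochs.

val_losses = [0.90, 0.70, 0.55, 0.50, 0.52, 0.51, 0.53, 0.54]  # hypothetical
patience = 3

best, wait, stopped_at = float("inf"), 0, None
for epoch, loss in enumerate(val_losses):
    if loss < best:
        best, wait = loss, 0   # improvement: remember it, reset the counter
    else:
        wait += 1              # no improvement this epoch
        if wait >= patience:
            stopped_at = epoch # give up: the model is likely overfitting now
            break

print(best, stopped_at)
```

Deep learning libraries provide this as a callback (e.g. Keras's EarlyStopping), but the logic is just this counter.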
In this assignment, you will implement all learnings from the neural network assignments on multiple datasets.
In this assignment we will learn OOP concepts like classes, objects, member variables, methods, attributes, and many more.
This assignment will teach you to code a neural network completely from scratch. After this assignment you will be very comfortable creating any kind of neural net from scratch, and that's the main agenda of this course.
In this assignment, we will cover basic CNN concepts, such as what a CNN is, padding, pooling, striding, and the fully connected layer, using the CIFAR dataset.
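Pooling, for example, can be written out in a few lines of plain Python; this sketch shows 2×2 max pooling with stride 2 on a toy feature map:

```python
# 2x2 max pooling with stride 2, written out in plain Python.
# Each output cell keeps the largest value in its 2x2 window,
# halving the spatial size while keeping the strongest activations.

def max_pool_2x2(image):
    h, w = len(image), len(image[0])
    return [[max(image[i][j], image[i][j + 1],
                 image[i + 1][j], image[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

feature_map = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 8],
]
print(max_pool_2x2(feature_map))  # [[4, 2], [2, 8]]
```

In a real CNN the framework's pooling layer does this over every channel of the feature map.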
In this assignment, we will learn the need for preprocessing, image augmentation, the steps involved in preprocessing, the use of ImageDataGenerator, and related steps using TensorFlow.
In this assignment you will learn the difference between fit and fit_generator, and what the flow_from_directory method is in TensorFlow, by implementing them on a dog-vs-cat classification dataset.
In this assignment we will learn what transfer learning is, why we need it, what ResNet is, the ResNet architecture, how to implement transfer learning using ResNet, and more.
The main purpose of this assignment is to get a complete idea of AlexNet and code it completely from scratch.
The main purpose of this assignment is to get a complete idea of VGG-16 and code it completely from scratch.
In this assignment, we will learn the functionality of OpenCV, a large open-source library for computer vision, machine learning, and image processing. We will learn how to process images and videos too.
In this assignment, we will first teach you the installation of YOLO and its dependencies, then getting the data, data annotation, how to create bounding boxes, how to train different YOLO models, TensorBoard implementation, and making predictions.
In this assignment, we will learn how to use Detectron for object detection, image segmentation, and panoptic segmentation.
In this assignment we will learn what image segmentation is, its importance, and the types of image segmentation, for example region-based segmentation and edge-detection segmentation.
In this assignment we will see how we can use DeOldify to colorize black & white images.
In this assignment we will see what GANs are, their applications, and how to implement one on a dataset.
In this assignment, you will learn about NLP, its top 8 use cases in industry, and the concepts of tokenization, Punkt, lemmatization, stemming, stemming vs lemmatization, and stop-word removal. All these concepts will help you later when dealing with sentences and paragraphs.
This assignment covers the concepts of CountVectorizer, bag of words, TF-IDF, n-grams, and POS tagging, which will teach you how to convert words into vectors. It will help you in dealing with text data. You will find all the necessary resources to do the assignment inside the file.
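The core idea of these vectorizers can be sketched in plain Python; the toy corpus below is made up, and a real project would use scikit-learn's CountVectorizer/TfidfVectorizer instead:

```python
import math

# A tiny bag-of-words + TF-IDF sketch. TF counts how often a word
# appears in a document; IDF down-weights words common to every document.

docs = ["the cat sat", "the dog sat", "the cat ran"]  # toy corpus
tokenized = [d.split() for d in docs]
vocab = sorted({w for doc in tokenized for w in doc})

def tf(word, doc):
    # Term frequency: share of the document taken up by this word.
    return doc.count(word) / len(doc)

def idf(word):
    # Inverse document frequency: rarer across documents = higher weight.
    df = sum(1 for doc in tokenized if word in doc)
    return math.log(len(tokenized) / df)

vectors = [[tf(w, doc) * idf(w) for w in vocab] for doc in tokenized]

# "the" appears in every document, so its idf (and its weight) is 0.
print(vocab)
print(vectors[0])
```

Each document becomes a fixed-length numeric vector, which is exactly what downstream models need.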
This assignment covers advanced concepts like Gensim, Word2Vec, continuous bag of words (CBOW), and skip-grams, which will teach you more ways of converting words into vectors. It is an extended version of assignment 2 on converting words into vectors.
It is a project on predicting the sentiment of text using the logistic regression technique and all the learnings from the previous 3 assignments. It covers CountVectorizer, lemmatization, stemming, TF-IDF, etc.
In this assignment, we will revise the basics of artificial neural networks with a practical implementation using the credit card churn modelling.csv dataset.
In this assignment you will build and train an RNN model on the MNIST dataset. You will learn the use cases of RNNs through practical implementation.
In this assignment we will learn what LSTM is and see its implementation as an RNN on the IMDB dataset.
In this assignment, we will learn about GRU, its architecture and applications, bidirectional LSTM RNNs, and the implementation of a bidirectional LSTM using TensorFlow.
In this assignment, we will learn about encoders, decoders, hidden states, the attention model and the Transformer, positional encoding, residuals, Hugging Face, and the transformers pipeline.
In this assignment we will see the implementation of the BERT model in two steps: pretraining and fine-tuning. We will do fine-tuning for QA, sentiment analysis, and named entity recognition (NER).
The Autocorrect with NLP project provides a way to get word recommendations as you start typing a word. It also recommends correct spellings for misspelled words based on text distance. It has an interface created in Google Colab.
In this assignment, we will learn the basics of time series, i.e. what a time series is, its use cases with real-world examples, and time series components, and we will use a simple dataset to visualize time series data.
In this assignment, we will learn what a stationary series is and how to check stationarity in the data using visualization, the augmented Dickey-Fuller test, and the KPSS test.
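A quick intuition for why stationarity matters: a series with a trend has a changing mean, but differencing it once can remove the trend. The toy series below is made up; real checks would use the ADF/KPSS tests (e.g. from statsmodels):

```python
# A series with a linear trend is not stationary (its mean keeps growing),
# but its first difference is constant: the trend has been removed.

series = [2 * t + 1 for t in range(10)]            # upward trend: 1, 3, 5, ...
diffed = [b - a for a, b in zip(series, series[1:])]

print(series[:5])  # growing values
print(diffed)      # all 2s: constant mean after differencing
```

This first-differencing step is exactly the "d" in ARIMA's (p, d, q) order, covered later in the course.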
In this assignment, we will cover topics like correlation, autocorrelation, the partial autocorrelation function, and differencing, using multiple datasets to explain each concept.
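Autocorrelation at lag k measures how strongly a series correlates with a shifted copy of itself; this plain-Python sketch (with a made-up alternating series) shows the idea, while real work would use library functions such as statsmodels' acf/pacf:

```python
# Autocorrelation at lag k: correlation of the series with itself
# shifted by k steps. Values near +1/-1 mean strong (anti-)correlation.

def autocorr(x, k):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    return sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / var

wave = [1, -1] * 10  # toy alternating series

print(autocorr(wave, 1))  # strongly negative: neighbours are opposites
print(autocorr(wave, 2))  # strongly positive: points two steps apart match
```

Plotting these values across many lags gives the ACF plot used to pick ARIMA orders.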
In this assignment, we will learn about ARIMA models, the terms used in ARIMA, AR & MA models, the order of differencing, the order of the AR term, the order of the MA term, how to build an ARIMA model, and the metrics used with it.
In this assignment, we will cover auto-ARIMA, interpreting residual plots in an ARIMA model, automatically building a SARIMA model, and building SARIMAX with an exogenous variable.
In this assignment, we will cover model selection, AIC, and BIC, and their implementation using drug sales data.
In this assignment, we will cover VARMA with its implementation.
In this assignment, we will cover autoregressive concepts with an example, a model, and its implementation.
In this assignment, we will see the implementation of an ANN for time series forecasting on a stock prices dataset.
In this assignment we will cover the effect of CNNs on numerical data and implement a CNN for time series forecasting.
In this assignment you will be learning about implementing time series forecasting using LSTM neural networks.
An image classification project where image augmentation and a pretrained network are used.
An intermediate project to detect facial emotions using OpenCV.
After centuries of intense whaling, recovering whale populations still have a hard time adapting to warming oceans and struggle every day to compete with the industrial fishing industry for food.
So, to aid whale conservation efforts, scientists use photo surveillance systems to monitor ocean activity. They use the shape of whales' tails and the unique markings found in the footage to identify the species of whale they are analyzing and to log whale pod dynamics and movements in detail.
For the past 40 years, most of this work has been done manually by individual scientists, leaving a huge heap of data untapped and underutilized. So in this project the main task is to build an algorithm to identify individual whales in the images. We will be analyzing HappyWhale's database of around 25k images.
The Flower Classification with CNN project aims to classify flowers into 5 classes (rose, sunflower, dandelion, daisy, and tulip) using convolutional neural networks. The project is deployed on the Gradio platform, which provides an interface where the user can upload a flower image and get the class of the flower along with the probability percentage.
In this NLP project, we are going to build a chatbot using deep learning techniques. The chatbot will be trained on a dataset containing categories (intents), patterns, and responses. We use a neural network to classify which category the user's message belongs to, and then we give a random response from the list of responses. We will build a web app for the chatbot using Flask.
Sound Classification is one of the most widely used applications in Audio Deep Learning. It involves learning to classify sounds and to predict the category of that sound. This type of problem can be applied to many practical scenarios e.g. classifying music clips to identify the genre of the music, or classifying short utterances by a set of speakers to identify the speaker based on the voice.
So in this project we will use an audio dataset and perform some transformations that make it suitable for computer vision techniques. This project demonstrates that CNNs are not just for image applications.
In this tutorial, we will build a spam detection model that classifies emails as spam or not spam, which can be used to filter unwanted and unsolicited emails. We will build this model using BERT and TensorFlow: BERT will be used to generate sentence encodings for all emails, and TensorFlow will be used to build the neural network, creating the input and output layers of our machine learning model.
In this assignment, we will build a deep neural network that functions as part of an end-to-end machine translation pipeline. The completed pipeline will accept English text as input and return the French translation.
Virus MNIST is a multiclass classification project where we use the Virus MNIST dataset, which has 10 classes of computer viruses. A CNN has been used to solve the problem, and you will also go through the Virus MNIST research paper using the link provided in the project.
In this project, we will forecast furniture sales using the time series techniques learnt in our assignments, like SARIMAX, the AR model, etc.
In this project, you will implement GAN applications on an anime dataset. You will generate fake handwriting using a DCGAN and also generate fake anime images using a pretrained StyleGAN model.
We will demonstrate how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's work on recurrent neural networks. Given a sequence of characters from this data ("Shakespear"), we train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.
After course completion we will help you with referrals if there are any matching opportunities in companies like:
The course is designed in the form of assignments. In the assignments we have topic-wise video links followed by related questions. You are supposed to code the solutions.
If you are stuck somewhere, you can reach out to our mentors. They are available from 3 P.M. to 11:59 P.M. every day.
After completion you need to submit the assignment, and you will receive the solution file.
No, there is no placement as part of the course. The course focuses on skill development to help you clear data science interviews.
It's a one-time payment, and you will get lifetime access to this course experience.
No, we do not organize live online classes. Instead, we collect the best topic-wise videos from different sources and bring them together for our learners.
Our mentor team comprises M.Tech and B.Tech students with good technical skills to resolve your course- and assignment-related queries.