The #66DaysOfData challenge is designed to help build data science, data engineering, analytics, and machine learning habits.
Alongside building these habits, there is the interesting opportunity to join other incredible communities, where you can learn and work with like-minded individuals from around the world.
The challenge is straightforward with two parts:
The latter has three benefits:
With the ever-increasing demand for big data solutions, it has become a necessity to leverage the potential of this technology for a scalable advantage. This article walks through creating an AWS Redshift cluster in Python.
A major advantage is the ability to run machine learning models at scale on S3 and Redshift:
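A minimal sketch of provisioning the cluster with boto3, assuming an IAM role ARN with S3 read access; the identifiers, node type, and region below are placeholders for illustration, not the exact configuration used in the article.

```python
def cluster_kwargs(cluster_id, db_name, user, password, role_arn):
    """Parameters for a minimal single-node Redshift cluster."""
    return {
        "ClusterType": "single-node",
        "NodeType": "dc2.large",
        "ClusterIdentifier": cluster_id,
        "DBName": db_name,
        "MasterUsername": user,
        "MasterUserPassword": password,
        "IamRoles": [role_arn],  # lets the cluster COPY data from S3
    }

def create_cluster(region, **kwargs):
    """Issue the create call; boto3 is imported lazily so cluster_kwargs
    stays usable (and testable) without AWS credentials."""
    import boto3
    client = boto3.client("redshift", region_name=region)
    return client.create_cluster(**cluster_kwargs(**kwargs))

# Usage (placeholder values):
# create_cluster("us-west-2", cluster_id="demo-cluster", db_name="dev",
#                user="awsuser", password="<password>",
#                role_arn="arn:aws:iam::<account-id>:role/<role-name>")
```

Once the cluster is available, data staged on S3 can be loaded with a `COPY` command and queried at scale.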
In this project, we apply unsupervised learning techniques to identify segments of the population that represent the core customer base for a mail-order sales company in Germany. We also adopt supervised learning techniques on partitioned and labeled data to identify and predict whether a sample from the population is a customer or not. Real-life data is provided by Bertelsmann Arvato Analytics, from which current insights are collected to provide accurate metrics. The goal of this project is to predict which segments of the population would be most likely to reach out to the company after receiving a corresponding mail order.
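The two-stage approach can be sketched with scikit-learn as below. This uses synthetic stand-in data and assumed model choices (KMeans for segmentation, gradient boosting for the customer classifier); the real Arvato features and the models used in the project are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
population = rng.normal(size=(500, 8))   # stand-in demographic features

# Unsupervised stage: segment the general population.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(population)

# Supervised stage: predict customer vs. non-customer on labeled data.
labels = rng.integers(0, 2, size=500)    # stand-in customer labels
X_train, X_test, y_train, y_test = train_test_split(
    population, labels, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
```

In the real project, the cluster profiles from the first stage are compared between the general population and the customer file to find over-represented segments.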
A report on key Seattle Airbnb business questions, using the CRISP-DM approach on the 2016 dataset.
In today's competitive markets, a number of business questions come into play when attempting to optimize key decisions. Using the Cross-Industry Standard Process for Data Mining (CRISP-DM), the Seattle Airbnb dataset is collected, cleaned, and engineered so that a good number of business insights can be gathered; the following five questions are the focus:
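The data-preparation phase of CRISP-DM can be illustrated with a small pandas step. The `price` column name and its `"$1,200.00"` formatting follow the public Seattle Airbnb listings file; the exact cleaning pipeline used in the report is not shown here.

```python
import pandas as pd

def clean_price(df):
    """Convert a '$1,200.00'-style price column to float and drop missing rows."""
    out = df.copy()
    out["price"] = (out["price"].astype(str)
                                .str.replace(r"[$,]", "", regex=True)
                                .astype(float))
    return out.dropna(subset=["price"])

# Usage:
# listings = clean_price(pd.read_csv("listings.csv"))
```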
Two posts before this (here), we worked on creating and transforming custom images into tensors for training and testing neural networks on the PyTorch framework; next, we built a neural network from an existing model, then made use of transfer learning with PyTorch to train and test our dataset. In this post, we give six proven advantages of Docker, and explain how it can be a useful platform for forthcoming projects. These qualities progressively show up as improved teamwork, collaboration, and deployment from an enterprise standpoint.
Docker is the world’s leading software containerization platform. It is…
In the previous post (here), we loaded and transformed custom images from a directory of training and validation datasets into appropriately processed tensors; now we are ready to load, modify, train, and test an existing model with our ready-made data, in four steps:
There are a variety of existing neural networks (NNs) trained on vast datasets such as ImageNet, Kaggle, and the UCI repository, to name a few. …
Hello everyone, I am a Data Science enthusiast, striving to learn, by all means, every method used to gain insights from current data in any sector and to produce quality results for the betterment of society.
I started learning PyTorch a week ago and found it important to write this blog for people having a tough time loading custom-made data (images) into a pretrained neural network. This blog post solves the problem using the DataLoader API from PyTorch, in five main steps.
1. Setting up your directory
- Save your images in two separate folders, one for training (../traindata) …