
The #66DaysOfData challenge is designed to help you build daily data science, data engineering, analytics, and machine learning habits.

Beyond the habit itself, the challenge is an opportunity to join other incredible communities, where you can learn and work alongside like-minded individuals from around the world.

The challenge is straightforward with two parts:

  1. Learn data-related topics every day for 66 days straight, for a minimum of 5 minutes per day.
  2. Share your progress on your social media platform of choice using #66DaysOfData.

The latter has 3 benefits:

  1. You create a habit of daily learning which is integral to your…

With the ever-increasing demand for big data solutions, it is essential to leverage the technology's potential for a scalable advantage. This article walks through creating an AWS Redshift cluster in Python.

https://www.google.com/url?sa=i&url=https%3A%2F%2Fwww.dremio.com%2Ftutorials%2Fbuilding-machine-learning-models-on-s3-and-red

A major advantage is the ability to run machine learning models at scale on S3 and Redshift.


Arvato Financial Solutions

Ideation Architecture

Introduction

In this project, we apply unsupervised learning techniques to identify segments of the population that represent the core customer base of a mail-order sales company in Germany. We then apply supervised learning techniques to partitioned, labeled data to predict whether an individual from the population is a customer. The real-life data is provided by Bertelsmann Arvato Analytics. The goal of the project is to predict which segments of the population are most likely to respond to the company's mail-order campaign.
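The two modeling stages described above can be sketched with scikit-learn. The real Arvato demographics data is not public, so the arrays, cluster count, and classifier choice below are stand-ins for illustration only:

```python
# Sketch: unsupervised segmentation, then supervised customer prediction,
# on synthetic stand-in data (the real Arvato dataset is not public).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
population = rng.normal(size=(500, 10))   # stand-in demographic features

# Stage 1 -- unsupervised: segment the general population into clusters
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(population)

# Stage 2 -- supervised: predict customer vs. non-customer on labeled rows
labels = rng.integers(0, 2, size=500)     # stand-in customer labels
clf = LogisticRegression(max_iter=1000).fit(population, labels)
probs = clf.predict_proba(population)[:, 1]   # probability of being a customer
```

In the actual project, the cluster profiles would be compared between the general population and the customer file to find over-represented segments.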

Preamble


A report to Seattle Airbnb key business questions via the CRISP-DM approach on 2016 dataset.


In today's competitive markets, a number of business questions come into play when attempting to optimize key decisions. Using the Cross-Industry Standard Process for Data Mining (CRISP-DM), the Seattle Airbnb dataset is collected, cleaned, and engineered, yielding a number of business insights, of which the following five questions are the focus:

  • How many Airbnb listings does each neighborhood in Seattle possess?
  • Which are the neighborhoods with the most expensive listings?
  • Which listings were available on December…
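The first two questions reduce to a count and a grouped mean in pandas. The tiny DataFrame below is a stand-in for the real listings CSV, assuming `neighbourhood` and `price` columns as in the public Seattle Airbnb dataset:

```python
# Sketch: answering the first two questions with pandas.
# Stand-in for pd.read_csv("listings.csv") from the Seattle Airbnb dataset.
import pandas as pd

listings = pd.DataFrame({
    "neighbourhood": ["Ballard", "Ballard", "Capitol Hill", "Fremont"],
    "price": [120.0, 95.0, 150.0, 110.0],
})

# Q1: how many listings does each neighborhood possess?
counts = listings["neighbourhood"].value_counts()

# Q2: neighborhoods ranked by mean listing price, most expensive first
mean_price = (listings.groupby("neighbourhood")["price"]
              .mean()
              .sort_values(ascending=False))
```

The real dataset needs one extra cleaning step: the `price` column ships as strings like `"$120.00"` and must be stripped and cast to float before aggregating.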


Two posts before this (here), we created and transformed custom images into tensors for training and testing neural networks in the PyTorch framework; next, we built a neural network from an existing model and used transfer learning with PyTorch to train and test our dataset. In this post, we present six advantages of Docker, drawn from experience, and show how it can be a useful platform for forthcoming projects. These qualities progressively pay off in improved teamwork, collaboration, and deployment from an enterprise standpoint.

Docker is the world’s leading software containerization platform. It is…


In the previous post (here), we loaded and transformed custom images from a directory of training and validation datasets into appropriately processed tensors; now we are ready to load, modify, train, and test an existing model with our ready-made data, in four steps:

  • Loading a Neural Network model
  • Building the classifier and training the network
  • Testing the modified network
  • Saving the checkpoint

Loading the Neural Network model

There are a variety of existing neural networks (NNs), trained on vast datasets such as ImageNet, Kaggle, and the UCI repository, to name a few. …


Hello everyone, I am a Data Science enthusiast, striving to learn, by all means, every method used to gain insights from data in any sector and produce quality results for the betterment of society.

I started learning PyTorch a week ago and thought it important to write this blog for people having a tough time loading custom-made data (images) into a pretrained neural network. This post solves the problem using PyTorch's DataLoader API in five main steps.

1. Setting up your directory

- Save your images in two separate folders, one for training (../traindata) …



Tsaku Nelson
