Objective: -
Employees are considered the backbone of an organization. The success or failure of an organization depends on the employees who work for it. Organizations must deal with the problems that arise when trained, skilled and experienced employees leave for better opportunities.
Nowadays, firms are expanding at a tremendous rate, and with this mass expansion, experienced professionals are in high demand. An experienced employee is an asset to the company; upon losing one, companies either try to retain the employee with revised compensation or hire a replacement. Predicting attrition in advance can therefore save a lot of money and time. Additionally, it allows the company's management to run a project pipeline efficiently and manage both hiring and the existing workforce flexibly.
Employee attrition is the gradual reduction of an organization's workforce as employees resign. Employees are valuable assets of any organization, so it is necessary to know whether employees are dissatisfied or whether there are other reasons for leaving their jobs.
The goal of this challenge is to build a machine learning model that helps a company predict employee attrition.
Step 1: Import all the required libraries
Pandas : In computer programming, pandas is a software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series.
Sklearn : Scikit-learn (formerly scikits.learn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy. The library is built upon SciPy (Scientific Python), which must be installed before you can use scikit-learn.
Pickle : The Python pickle module is used for serializing and de-serializing a Python object structure. Pickling is a way to convert a Python object (list, dict, etc.) into a character stream. The idea is that this character stream contains all the information necessary to reconstruct the object in another Python script.
Seaborn : Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.
Matplotlib : Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK.
#Loading libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
import pickle
import sklearn
import sklearn.linear_model
from sklearn import preprocessing
from sklearn.preprocessing import OneHotEncoder, scale
from sklearn.linear_model import LinearRegression, Ridge, RidgeCV, Lasso, LassoCV
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_absolute_percentage_error, mean_squared_error, r2_score
from sklearn.decomposition import PCA
import warnings
warnings.filterwarnings('ignore')
Step 2 : Read dataset and basic details of dataset
Goal:- In this step we are going to read the dataset, view it, and analyse basic details such as the total number of rows and columns, the column data types, and whether any new columns need to be created.
In this stage we are going to read our problem dataset and have a look at it.
#loading the dataset
try:
    df = pd.read_csv('C:/Users/YAJENDRA/Documents/final notebooks/Employee Attrition Prediction/Data/data.csv') #Path for the file
    print('Data read done successfully...')
except (FileNotFoundError, IOError):
    print("Wrong file or file path")
Data read done successfully...
# To view the content inside the dataset we can use the head() method, which returns a specified number of rows from the top.
# The head() method returns the first 5 rows if a number is not specified.
df.head()
Dataset: -
The dataset used in this model is available at Kaggle.
Attribute Information:
- Attrition
Other features are:
Age
BusinessTravel
DailyRate
Department
DistanceFromHome
Education
EducationField
EmployeeCount
EmployeeNumber
EnvironmentSatisfaction
Gender
HourlyRate
JobInvolvement
JobLevel
JobRole
JobSatisfaction
MaritalStatus
MonthlyIncome
MonthlyRate
NumCompaniesWorked
Over18
OverTime
PercentSalaryHike
PerformanceRating
RelationshipSatisfaction
StandardHours
StockOptionLevel
TotalWorkingYears
TrainingTimesLastYear
WorkLifeBalance
YearsAtCompany
YearsInCurrentRole
YearsSinceLastPromotion
YearsWithCurrManager
Step 3: Data Preprocessing
Why need of Data Preprocessing?
Preprocessing data is an important step for data analysis. The following are some benefits of preprocessing data:
It improves accuracy and reliability. Preprocessing data removes missing or inconsistent data values resulting from human or computer error, which can improve the accuracy and quality of a dataset, making it more reliable.
It makes data consistent. When collecting data, it’s possible to have data duplicates, and discarding them during preprocessing can ensure the data values for analysis are consistent, which helps produce accurate results.
It increases the data’s algorithm readability. Preprocessing enhances the data’s quality and makes it easier for machine learning algorithms to read, use, and interpret it.
After we read the data, we can look at the data using:
# count the total number of rows and columns.
print ('The train data has {0} rows and {1} columns'.format(df.shape[0],df.shape[1]))
The train data has 1470 rows and 35 columns
By analysing the problem statement and the dataset, we get to know that the target variable is the "Attrition" column, which tells us whether an employee leaves or not. Yes means the employee left the company and No means the employee stayed.
df['Attrition'].value_counts()
No 1233
Yes 237
Name: Attrition, dtype: int64
features = ['Attrition']
plt.subplots(figsize=(20, 10))
for i, col in enumerate(features):
    plt.subplot(1, 3, i + 1)
    x = df[col].value_counts()
    plt.pie(x.values,
            labels=x.index,
            autopct='%1.1f%%')
    plt.title('Attrition', fontsize=20)
plt.show()
The value_counts() method counts how many times each distinct value occurs in a particular column.
df.shape
(1470, 35)
The df.shape attribute shows the shape (rows, columns) of the dataset.
We can identify that out of the 1470 employees, 1233 are labeled No (stayed) and 237 are labeled Yes (left).
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1470 entries, 0 to 1469
Data columns (total 35 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Age 1470 non-null int64
1 Attrition 1470 non-null object
2 BusinessTravel 1470 non-null object
3 DailyRate 1470 non-null int64
4 Department 1470 non-null object
5 DistanceFromHome 1470 non-null int64
6 Education 1470 non-null int64
7 EducationField 1470 non-null object
8 EmployeeCount 1470 non-null int64
9 EmployeeNumber 1470 non-null int64
10 EnvironmentSatisfaction 1470 non-null int64
11 Gender 1470 non-null object
12 HourlyRate 1470 non-null int64
13 JobInvolvement 1470 non-null int64
14 JobLevel 1470 non-null int64
15 JobRole 1470 non-null object
16 JobSatisfaction 1470 non-null int64
17 MaritalStatus 1470 non-null object
18 MonthlyIncome 1470 non-null int64
19 MonthlyRate 1470 non-null int64
20 NumCompaniesWorked 1470 non-null int64
21 Over18 1470 non-null object
22 OverTime 1470 non-null object
23 PercentSalaryHike 1470 non-null int64
24 PerformanceRating 1470 non-null int64
25 RelationshipSatisfaction 1470 non-null int64
26 StandardHours 1470 non-null int64
27 StockOptionLevel 1470 non-null int64
28 TotalWorkingYears 1470 non-null int64
29 TrainingTimesLastYear 1470 non-null int64
30 WorkLifeBalance 1470 non-null int64
31 YearsAtCompany 1470 non-null int64
32 YearsInCurrentRole 1470 non-null int64
33 YearsSinceLastPromotion 1470 non-null int64
34 YearsWithCurrManager 1470 non-null int64
dtypes: int64(26), object(9)
memory usage: 402.1+ KB
The df.info() method prints information about a DataFrame including the index dtype and columns, non-null values and memory usage.
df.iloc[1]
Age 49
Attrition No
BusinessTravel Travel_Frequently
DailyRate 279
Department Research & Development
DistanceFromHome 8
Education 1
EducationField Life Sciences
EmployeeCount 1
EmployeeNumber 2
EnvironmentSatisfaction 3
Gender Male
HourlyRate 61
JobInvolvement 2
JobLevel 2
JobRole Research Scientist
JobSatisfaction 2
MaritalStatus Married
MonthlyIncome 5130
MonthlyRate 24907
NumCompaniesWorked 1
Over18 Y
OverTime No
PercentSalaryHike 23
PerformanceRating 4
RelationshipSatisfaction 4
StandardHours 80
StockOptionLevel 1
TotalWorkingYears 10
TrainingTimesLastYear 3
WorkLifeBalance 3
YearsAtCompany 10
YearsInCurrentRole 7
YearsSinceLastPromotion 1
YearsWithCurrManager 7
Name: 1, dtype: object
df.iloc[ ] is primarily integer position based (from 0 to length-1 of the axis), but may also be used with a boolean array. The iloc property gets, or sets, the value(s) of the specified indexes.
Data Type Check for every column
Why data type check is required?
Data type checks help us understand what type of variables our dataset contains and help us decide whether to keep a variable or not. If the dataset contains continuous data, float and integer type variables are the useful ones, and if we have to classify values, categorical variables are useful.
objects_cols = ['object']
objects_lst = list(df.select_dtypes(include=objects_cols).columns)
print("Total number of categorical columns are ", len(objects_lst))
print("There names are as follows: ", objects_lst)
Total number of categorical columns are 9
There names are as follows: ['Attrition', 'BusinessTravel', 'Department', 'EducationField', 'Gender', 'JobRole', 'MaritalStatus', 'Over18', 'OverTime']
int64_cols = ['int64']
int64_lst = list(df.select_dtypes(include=int64_cols).columns)
print("Total number of numerical columns are ", len(int64_lst))
print("There names are as follows: ", int64_lst)
Total number of numerical columns are 26
There names are as follows: ['Age', 'DailyRate', 'DistanceFromHome', 'Education', 'EmployeeCount', 'EmployeeNumber', 'EnvironmentSatisfaction', 'HourlyRate', 'JobInvolvement', 'JobLevel', 'JobSatisfaction', 'MonthlyIncome', 'MonthlyRate', 'NumCompaniesWorked', 'PercentSalaryHike', 'PerformanceRating', 'RelationshipSatisfaction', 'StandardHours', 'StockOptionLevel', 'TotalWorkingYears', 'TrainingTimesLastYear', 'WorkLifeBalance', 'YearsAtCompany', 'YearsInCurrentRole', 'YearsSinceLastPromotion', 'YearsWithCurrManager']
Step 3 Insights: -
- We have total 35 features where 26 of them are integer type and 9 are object type.
After this step we have to calculate various evaluation parameters which will help us in cleaning and analysing the data more accurately.
Step 4: Descriptive Analysis
Goal/Purpose: Finding the data distribution of the features. Visualization helps to understand data and also to explain the data to another person.
Things we are going to do in this step:
Mean
Median
Mode
Standard Deviation
Variance
Null Values
NaN Values
Min value
Max value
Count Value
Quartiles
Correlation
Skewness
df.describe()
The df.describe() method returns a description of the data in the DataFrame. For numerical columns the description contains this information for each column: count — the number of non-empty values, mean — the average value, plus the standard deviation, minimum, quartiles and maximum.
Measure the variability of data of the dataset
Variability describes how far apart data points lie from each other and from the center of a distribution.
1. Standard Deviation
The standard deviation is the average amount of variability in your dataset.
It tells you, on average, how far each data point lies from the mean. The larger the standard deviation, the more variable the data set is; a standard deviation of zero means there is no variability at all, so such a column adds nothing to the dataset.
So, it helps in understanding how spread out the measurements are: the more spread out the data, the greater its standard deviation. For example, of two columns with the same mean, the one with the larger standard deviation has values that differ much more from employee to employee.
df.std()
Age 9.135373e+00
DailyRate 4.035091e+02
DistanceFromHome 8.106864e+00
Education 1.024165e+00
EmployeeCount 1.110601e-16
EmployeeNumber 6.020243e+02
EnvironmentSatisfaction 1.093082e+00
HourlyRate 2.032943e+01
JobInvolvement 7.115611e-01
JobLevel 1.106940e+00
JobSatisfaction 1.102846e+00
MonthlyIncome 4.707957e+03
MonthlyRate 7.117786e+03
NumCompaniesWorked 2.498009e+00
PercentSalaryHike 3.659938e+00
PerformanceRating 3.608235e-01
RelationshipSatisfaction 1.081209e+00
StandardHours 0.000000e+00
StockOptionLevel 8.520767e-01
TotalWorkingYears 7.780782e+00
TrainingTimesLastYear 1.289271e+00
WorkLifeBalance 7.064758e-01
YearsAtCompany 6.126525e+00
YearsInCurrentRole 3.623137e+00
YearsSinceLastPromotion 3.222430e+00
YearsWithCurrManager 3.568136e+00
dtype: float64
We can also understand the standard deviation using the below function.
def std_cal(df, float64_lst):
    cols = ['normal_value', 'zero_value']
    zero_value = 0
    normal_value = 0

    for value in float64_lst:
        rs = round(df[value].std(), 6)
        if rs > 0:
            normal_value = normal_value + 1
        elif rs == 0:
            zero_value = zero_value + 1

    std_total_df = pd.DataFrame([[normal_value, zero_value]], columns=cols)
    return std_total_df

int64_cols = ['int64']
int64_lst = list(df.select_dtypes(include=int64_cols).columns)
std_cal(df, int64_lst)
zero_value -> the column has zero standard deviation, i.e. no variability at all, which means it adds no useful information to the dataset.
2. Variance
The variance is the average of squared deviations from the mean. A deviation from the mean is how far a score lies from the mean.
Variance is the square of the standard deviation. This means that the units of variance are much larger than those of a typical value of a data set.
Why do we use variance?
By squaring the deviations we get a non-negative quantity, i.e. dispersion cannot be negative. The presence of variance is very important in your dataset because it allows the model to learn about the different patterns hidden in the data.
df.var()
Age 8.345505e+01
DailyRate 1.628196e+05
DistanceFromHome 6.572125e+01
Education 1.048914e+00
EmployeeCount 1.233434e-32
EmployeeNumber 3.624333e+05
EnvironmentSatisfaction 1.194829e+00
HourlyRate 4.132856e+02
JobInvolvement 5.063193e-01
JobLevel 1.225316e+00
JobSatisfaction 1.216270e+00
MonthlyIncome 2.216486e+07
MonthlyRate 5.066288e+07
NumCompaniesWorked 6.240049e+00
PercentSalaryHike 1.339514e+01
PerformanceRating 1.301936e-01
RelationshipSatisfaction 1.169013e+00
StandardHours 0.000000e+00
StockOptionLevel 7.260346e-01
TotalWorkingYears 6.054056e+01
TrainingTimesLastYear 1.662219e+00
WorkLifeBalance 4.991081e-01
YearsAtCompany 3.753431e+01
YearsInCurrentRole 1.312712e+01
YearsSinceLastPromotion 1.038406e+01
YearsWithCurrManager 1.273160e+01
dtype: float64
We can also understand the Variance using the below function.
zero_cols = []

def var_cal(df, float64_lst):
    cols = ['normal_value', 'zero_value']
    zero_value = 0
    normal_value = 0

    for value in float64_lst:
        rs = round(df[value].var(), 6)
        if rs > 0:
            normal_value = normal_value + 1
        elif rs == 0:
            zero_value = zero_value + 1
            zero_cols.append(value)

    var_total_df = pd.DataFrame([[normal_value, zero_value]], columns=cols)
    return var_total_df

var_cal(df, int64_lst)
zero_value -> Zero variance means that there is no difference in the data values, which means that they are all the same.
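The zero-variance columns flagged above (for example EmployeeCount and StandardHours in the df.var() output) take the same value for every employee, so they cannot help a model. Below is a minimal sketch of how they could be listed and optionally dropped, using the zero_cols list that var_cal fills in; the notebook itself keeps every column and lets the algorithms handle them.
# Columns collected in zero_cols have an identical value in every row,
# so they carry no information for the model. Dropping them is optional;
# the rest of this notebook keeps the full column set.
constant_cols = [col for col in zero_cols if col in df.columns]
print("Zero-variance columns:", constant_cols)

# df = df.drop(columns=constant_cols)   # uncomment to actually drop them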
Measure central tendency
A measure of central tendency is a single value that attempts to describe a set of data by identifying the central position within that set of data. As such, measures of central tendency are sometimes called measures of central location. They are also classed as summary statistics.
Mean — The average value. Median — The mid point value. Mode — The most common value.
1. Mean
The mean is the arithmetic average, and it is probably the measure of central tendency that you are most familiar with.
Why do we calculate mean?
The mean is used to summarize a data set. It is a measure of the center of a data set.
df.mean()
Age 36.923810
DailyRate 802.485714
DistanceFromHome 9.192517
Education 2.912925
EmployeeCount 1.000000
EmployeeNumber 1024.865306
EnvironmentSatisfaction 2.721769
HourlyRate 65.891156
JobInvolvement 2.729932
JobLevel 2.063946
JobSatisfaction 2.728571
MonthlyIncome 6502.931293
MonthlyRate 14313.103401
NumCompaniesWorked 2.693197
PercentSalaryHike 15.209524
PerformanceRating 3.153741
RelationshipSatisfaction 2.712245
StandardHours 80.000000
StockOptionLevel 0.793878
TotalWorkingYears 11.279592
TrainingTimesLastYear 2.799320
WorkLifeBalance 2.761224
YearsAtCompany 7.008163
YearsInCurrentRole 4.229252
YearsSinceLastPromotion 2.187755
YearsWithCurrManager 4.123129
dtype: float64
We can also understand the mean using the below function.
def mean_cal(df, int64_lst):
    cols = ['normal_value', 'zero_value']
    zero_value = 0
    normal_value = 0

    for value in int64_lst:
        rs = round(df[value].mean(), 6)
        if rs > 0:
            normal_value = normal_value + 1
        elif rs == 0:
            zero_value = zero_value + 1

    mean_total_df = pd.DataFrame([[normal_value, zero_value]], columns=cols)
    return mean_total_df

mean_cal(df, int64_lst)
zero_value -> the mean of a particular column is zero, which isn't useful in any way; such a column is a candidate to be dropped.
2.Median
The median is the middle value. It is the value that splits the dataset in half. The median of a dataset is the value that, assuming the dataset is ordered from smallest to largest, falls in the middle. If there is an even number of values in a dataset, the median is the average of the middle two values.
Why do we calculate median ?
By comparing the median to the mean, you can get an idea of the distribution of a dataset. When the mean and the median are the same, the dataset is more or less evenly distributed from the lowest to highest values.
df.median()
Age 36.0
DailyRate 802.0
DistanceFromHome 7.0
Education 3.0
EmployeeCount 1.0
EmployeeNumber 1020.5
EnvironmentSatisfaction 3.0
HourlyRate 66.0
JobInvolvement 3.0
JobLevel 2.0
JobSatisfaction 3.0
MonthlyIncome 4919.0
MonthlyRate 14235.5
NumCompaniesWorked 2.0
PercentSalaryHike 14.0
PerformanceRating 3.0
RelationshipSatisfaction 3.0
StandardHours 80.0
StockOptionLevel 1.0
TotalWorkingYears 10.0
TrainingTimesLastYear 3.0
WorkLifeBalance 3.0
YearsAtCompany 5.0
YearsInCurrentRole 3.0
YearsSinceLastPromotion 1.0
YearsWithCurrManager 3.0
dtype: float64
We can also understand the median using the below function.
def median_cal(df, int64_lst):
    cols = ['normal_value', 'zero_value']
    zero_value = 0
    normal_value = 0

    for value in int64_lst:
        rs = round(df[value].median(), 6)
        if rs > 0:
            normal_value = normal_value + 1
        elif rs == 0:
            zero_value = zero_value + 1

    median_total_df = pd.DataFrame([[normal_value, zero_value]], columns=cols)
    return median_total_df

median_cal(df, int64_lst)
zero_value -> the median of a particular column is zero, which isn't useful in any way; such a column is a candidate to be dropped.
3. Mode
The mode is the value that occurs the most frequently in your data set. On a bar chart, the mode is the highest bar. If the data have multiple values that are tied for occurring the most frequently, you have a multimodal distribution. If no value repeats, the data do not have a mode.
Why do we calculate mode ?
The mode can be used to summarize categorical variables, while the mean and median can be calculated only for numeric variables. This is the main advantage of the mode as a measure of central tendency. It’s also useful for discrete variables and for continuous variables when they are expressed as intervals.
df.mode()
def mode_cal(df, int64_lst):
    cols = ['normal_value', 'zero_value', 'string_value']
    zero_value = 0
    normal_value = 0
    string_value = 0

    for value in int64_lst:
        rs = df[value].mode()[0]
        if isinstance(rs, str):
            string_value = string_value + 1
        else:
            if rs > 0:
                normal_value = normal_value + 1
            elif rs == 0:
                zero_value = zero_value + 1

    mode_total_df = pd.DataFrame([[normal_value, zero_value, string_value]], columns=cols)
    return mode_total_df

mode_cal(df, list(df.columns))
zero_value -> the mode of a particular column is zero, which isn't useful in any way; such a column is a candidate to be dropped.
Null and Nan values
- Null Values
A null value in a relational database is used when the value in a column is unknown or missing. A null is neither an empty string (for character or datetime data types) nor a zero value (for numeric data types).
df.isnull().sum()
Age 0
Attrition 0
BusinessTravel 0
DailyRate 0
Department 0
DistanceFromHome 0
Education 0
EducationField 0
EmployeeCount 0
EmployeeNumber 0
EnvironmentSatisfaction 0
Gender 0
HourlyRate 0
JobInvolvement 0
JobLevel 0
JobRole 0
JobSatisfaction 0
MaritalStatus 0
MonthlyIncome 0
MonthlyRate 0
NumCompaniesWorked 0
Over18 0
OverTime 0
PercentSalaryHike 0
PerformanceRating 0
RelationshipSatisfaction 0
StandardHours 0
StockOptionLevel 0
TotalWorkingYears 0
TrainingTimesLastYear 0
WorkLifeBalance 0
YearsAtCompany 0
YearsInCurrentRole 0
YearsSinceLastPromotion 0
YearsWithCurrManager 0
dtype: int64
We notice that there are no null values in our dataset.
- Nan Values
NaN, standing for Not a Number, is a member of a numeric data type that can be interpreted as a value that is undefined or unrepresentable, especially in floating-point arithmetic.
df.isna().sum()
Age 0
Attrition 0
BusinessTravel 0
DailyRate 0
Department 0
DistanceFromHome 0
Education 0
EducationField 0
EmployeeCount 0
EmployeeNumber 0
EnvironmentSatisfaction 0
Gender 0
HourlyRate 0
JobInvolvement 0
JobLevel 0
JobRole 0
JobSatisfaction 0
MaritalStatus 0
MonthlyIncome 0
MonthlyRate 0
NumCompaniesWorked 0
Over18 0
OverTime 0
PercentSalaryHike 0
PerformanceRating 0
RelationshipSatisfaction 0
StandardHours 0
StockOptionLevel 0
TotalWorkingYears 0
TrainingTimesLastYear 0
WorkLifeBalance 0
YearsAtCompany 0
YearsInCurrentRole 0
YearsSinceLastPromotion 0
YearsWithCurrManager 0
dtype: int64
We notice that there are no NaN values in our dataset.
Another way to remove null and NaN values (if there were any) is the method df.dropna(inplace=True).
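Since this dataset has no missing values, nothing actually needs to be removed, but a minimal sketch of the two usual options (dropping rows versus filling gaps) is shown below for reference only; neither changes df here.
# Shown for reference only: the dataset has no missing values,
# so both results have the same shape as df.
cleaned = df.dropna()                               # drop any row containing a missing value
filled = df.fillna(df.median(numeric_only=True))    # or fill numeric gaps with column medians
print(cleaned.shape, filled.shape)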
Count of unique occurrences of every value in all categorical columns
for value in objects_lst:
    print(f"{value:{10}} {df[value].value_counts()}")
Attrition No 1233
Yes 237
Name: Attrition, dtype: int64
BusinessTravel Travel_Rarely 1043
Travel_Frequently 277
Non-Travel 150
Name: BusinessTravel, dtype: int64
Department Research & Development 961
Sales 446
Human Resources 63
Name: Department, dtype: int64
EducationField Life Sciences 606
Medical 464
Marketing 159
Technical Degree 132
Other 82
Human Resources 27
Name: EducationField, dtype: int64
Gender Male 882
Female 588
Name: Gender, dtype: int64
JobRole Sales Executive 326
Research Scientist 292
Laboratory Technician 259
Manufacturing Director 145
Healthcare Representative 131
Manager 102
Sales Representative 83
Research Director 80
Human Resources 52
Name: JobRole, dtype: int64
MaritalStatus Married 673
Single 470
Divorced 327
Name: MaritalStatus, dtype: int64
Over18 Y 1470
Name: Over18, dtype: int64
OverTime No 1054
Yes 416
Name: OverTime, dtype: int64
Categorical data are variables that contain label values rather than numeric values. The number of possible values is often limited to a fixed set.
We use Label Encoder to label the categorical data. Label Encoder is part of the scikit-learn library in Python and is used to convert categorical, or text, data into numbers, which our predictive models can better understand.
Label Encoding refers to converting the labels into a numeric form so as to convert them into a machine-readable form. Machine learning algorithms can then decide in a better way how those labels must be operated on. It is an important preprocessing step for a structured dataset in supervised learning.
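LabelEncoder maps each category to an arbitrary integer, which silently imposes an order on nominal features such as Department or JobRole. One-hot encoding (via pandas get_dummies, or the OneHotEncoder imported earlier but not used below) avoids that. The lines below are a minimal sketch for reference only; this notebook sticks with label encoding.
# Sketch only: one-hot encode the nominal columns instead of label encoding them.
# Attrition (the target) and Over18 (a constant column) are left out on purpose.
nominal_cols = ['BusinessTravel', 'Department', 'EducationField',
                'Gender', 'JobRole', 'MaritalStatus', 'OverTime']
df_onehot = pd.get_dummies(df, columns=nominal_cols, drop_first=True)
print(df_onehot.shape)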
#Before Encoding
for i in objects_lst:
    print(i)
    print()
    print(df[i])
    print("-------------------------------------------------")
    print()
Attrition
0 Yes
1 No
2 Yes
3 No
4 No
...
1465 No
1466 No
1467 No
1468 No
1469 No
Name: Attrition, Length: 1470, dtype: object
-------------------------------------------------
BusinessTravel
0 Travel_Rarely
1 Travel_Frequently
2 Travel_Rarely
3 Travel_Frequently
4 Travel_Rarely
...
1465 Travel_Frequently
1466 Travel_Rarely
1467 Travel_Rarely
1468 Travel_Frequently
1469 Travel_Rarely
Name: BusinessTravel, Length: 1470, dtype: object
-------------------------------------------------
Department
0 Sales
1 Research & Development
2 Research & Development
3 Research & Development
4 Research & Development
...
1465 Research & Development
1466 Research & Development
1467 Research & Development
1468 Sales
1469 Research & Development
Name: Department, Length: 1470, dtype: object
-------------------------------------------------
EducationField
0 Life Sciences
1 Life Sciences
2 Other
3 Life Sciences
4 Medical
...
1465 Medical
1466 Medical
1467 Life Sciences
1468 Medical
1469 Medical
Name: EducationField, Length: 1470, dtype: object
-------------------------------------------------
Gender
0 Female
1 Male
2 Male
3 Female
4 Male
...
1465 Male
1466 Male
1467 Male
1468 Male
1469 Male
Name: Gender, Length: 1470, dtype: object
-------------------------------------------------
JobRole
0 Sales Executive
1 Research Scientist
2 Laboratory Technician
3 Research Scientist
4 Laboratory Technician
...
1465 Laboratory Technician
1466 Healthcare Representative
1467 Manufacturing Director
1468 Sales Executive
1469 Laboratory Technician
Name: JobRole, Length: 1470, dtype: object
-------------------------------------------------
MaritalStatus
0 Single
1 Married
2 Single
3 Married
4 Married
...
1465 Married
1466 Married
1467 Married
1468 Married
1469 Married
Name: MaritalStatus, Length: 1470, dtype: object
-------------------------------------------------
Over18
0 Y
1 Y
2 Y
3 Y
4 Y
..
1465 Y
1466 Y
1467 Y
1468 Y
1469 Y
Name: Over18, Length: 1470, dtype: object
-------------------------------------------------
OverTime
0 Yes
1 No
2 Yes
3 Yes
4 No
...
1465 No
1466 No
1467 Yes
1468 No
1469 No
Name: OverTime, Length: 1470, dtype: object
-------------------------------------------------
#Encoding categorical data values
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
for i in objects_lst:
    df[i] = le.fit_transform(df[i])
#After encoding or converting categorical col values into numbers
for i in objects_lst:
    print(i)
    print(df[i])
Attrition
0 1
1 0
2 1
3 0
4 0
..
1465 0
1466 0
1467 0
1468 0
1469 0
Name: Attrition, Length: 1470, dtype: int32
BusinessTravel
0 2
1 1
2 2
3 1
4 2
..
1465 1
1466 2
1467 2
1468 1
1469 2
Name: BusinessTravel, Length: 1470, dtype: int32
Department
0 2
1 1
2 1
3 1
4 1
..
1465 1
1466 1
1467 1
1468 2
1469 1
Name: Department, Length: 1470, dtype: int32
EducationField
0 1
1 1
2 4
3 1
4 3
..
1465 3
1466 3
1467 1
1468 3
1469 3
Name: EducationField, Length: 1470, dtype: int32
Gender
0 0
1 1
2 1
3 0
4 1
..
1465 1
1466 1
1467 1
1468 1
1469 1
Name: Gender, Length: 1470, dtype: int32
JobRole
0 7
1 6
2 2
3 6
4 2
..
1465 2
1466 0
1467 4
1468 7
1469 2
Name: JobRole, Length: 1470, dtype: int32
MaritalStatus
0 2
1 1
2 2
3 1
4 1
..
1465 1
1466 1
1467 1
1468 1
1469 1
Name: MaritalStatus, Length: 1470, dtype: int32
Over18
0 0
1 0
2 0
3 0
4 0
..
1465 0
1466 0
1467 0
1468 0
1469 0
Name: Over18, Length: 1470, dtype: int32
OverTime
0 1
1 0
2 1
3 1
4 0
..
1465 0
1466 0
1467 1
1468 0
1469 0
Name: OverTime, Length: 1470, dtype: int32
1 ~ Yes, 0 ~ No
Skewness
Skewness is a measure of the asymmetry of a distribution. A distribution is asymmetrical when its left and right side are not mirror images. A distribution can have right (or positive), left (or negative), or zero skewness
Why do we calculate Skewness ?
Skewness gives the direction of the outliers if it is right-skewed, most of the outliers are present on the right side of the distribution while if it is left-skewed, most of the outliers will present on the left side of the distribution
Below is the function to calculate skewness.
def right_nor_left(df, int64_lst):
    temp_skewness = ['column', 'skewness_value', 'skewness (+ve or -ve)']
    temp_skewness_values = []

    temp_total = ["positive (+ve) skewed", "normal distribution", "negative (-ve) skewed"]
    positive = 0
    negative = 0
    normal = 0

    for value in int64_lst:
        rs = round(df[value].skew(), 4)
        if rs > 0:
            temp_skewness_values.append([value, rs, "positive (+ve) skewed"])
            positive = positive + 1
        elif rs == 0:
            temp_skewness_values.append([value, rs, "normal distribution"])
            normal = normal + 1
        elif rs < 0:
            temp_skewness_values.append([value, rs, "negative (-ve) skewed"])
            negative = negative + 1

    skewness_df = pd.DataFrame(temp_skewness_values, columns=temp_skewness)
    skewness_total_df = pd.DataFrame([[positive, normal, negative]], columns=temp_total)
    return skewness_df, skewness_total_df

int64_cols = ['int64', 'int32']
int64_lst_col = list(df.select_dtypes(include=int64_cols).columns)

skew_df, skew_total_df = right_nor_left(df, int64_lst_col)
skew_df
skew_total_df
We notice the following from the above results:
20 columns are positively skewed
12 columns are negatively skewed
3 columns show no skew (normal)
Step 4 Insights: -
With the statistical analysis we have found that the data have a lot of skewness: most of the columns are skewed (the majority positively), and a few columns have zero variance.
Statistical analysis is a little difficult to understand at a glance, so to make it more understandable we will perform visualization on the data, which will help us understand it easily.
Why we are calculating all these metrics?
Mean, median, mode, variance and standard deviation are all very basic but very important concepts of statistics used in data science. Almost every machine learning algorithm uses these concepts in its data preprocessing steps. They are part of descriptive statistics, which we use to describe and understand the features of the data in machine learning.
Need of Employee Attrition prediction: -
Managing the workforce: If supervisors or HR know in advance which employees are planning to leave the company, they can reach out to those employees and try to retain them, or manage the workforce by hiring replacements.
Smooth pipeline: If all the employees on a project keep working on it, the project pipeline stays smooth; but if an efficient employee suddenly leaves the company, the workflow is disrupted.
Hiring management: If the HR team of a particular project knows which employees intend to leave, they can plan how many people to hire and bring in valuable resources whenever needed, keeping work flowing efficiently.
Step 5: Data Exploration
Goal/Purpose:
Graphs we are going to develop in this step
Histogram of all columns to check the distrubution of the columns
Distplot or distribution plot of all columns to check the variation in the data distribution
Heatmap to calculate correlation within feature variables
Boxplot to find out outlier in the feature columns
1. Histogram
A histogram is a bar graph-like representation of data that buckets a range of values into columns along the horizontal x-axis. The vertical y-axis represents the number count or percentage of occurrences in the data for each column.
# Distribution in attributes
%matplotlib inline
import matplotlib.pyplot as plt
df.hist(bins=50, figsize=(20,20))
plt.show()
Histogram Insight: -
Histogram helps in identifying the following:
View the shape of your data set’s distribution to look for outliers or other significant data points.
Determine whether something significant has occurred from one time period to another.
Why Histogram?
It is used to illustrate the major features of the distribution of the data in a convenient form. It is also useful when dealing with large data sets (greater than 100 observations). It can help detect any unusual observations (outliers) or any gaps in the data.
From the above graphical representation we can identify columns whose bars extend well beyond the bulk of the data, which hints at outliers above the typical range.
We can also see that for many columns the values trail off to the right, which indicates positive skewness, while roughly centred distributions indicate normal (near-zero) skewness.
2. Distplot
A Distplot or distribution plot, depicts the variation in the data distribution. Seaborn Distplot represents the overall distribution of continuous data variables. The Seaborn module along with the Matplotlib module is used to depict the distplot with different variations in it
num = [f for f in df.columns if df.dtypes[f] != 'object']
nd = pd.melt(df, value_vars = num)
n1 = sns.FacetGrid (nd, col='variable', col_wrap=4, sharex=False, sharey = False)
n1 = n1.map(sns.distplot, 'value')
n1
<seaborn.axisgrid.FacetGrid at 0x2020e5db510>
Distplot Insights: -
Above are the distribution plots confirming the skewness statistics of the data; the results are:
20 columns are positively skewed, 12 columns are negatively skewed and 3 columns show no skew.
One extra column appears here, i.e. Attrition, which is our target variable; it is also positively skewed, simply because far more employees stay than leave. For a continuous target, a log transform is the usual way to bring it closer to normal, since a normally distributed (or close to normal) target variable helps in better modeling the relationship between the target and the independent variables; for a binary target like this one, the skew just reflects class imbalance.
Why Distplot?
Skewness is demonstrated on a bell curve when data points are not distributed symmetrically to the left and right sides of the median on a bell curve. If the bell curve is shifted to the left or the right, it is said to be skewed.
We can observe that the curve is shifted to the left with a long right tail, which indicates positive skewness. As most of the columns are skewed in the same (positive) direction, no scaling is applied here.
Let’s proceed and check the distribution of the target variable.
#+ve skewed
df['Attrition'].skew()
1.8443661240010911
The target variable is positively skewed. A normally distributed (or close to normal) target variable helps in better modeling the relationship between the target and independent variables.
3. Heatmap
A heatmap (or heat map) is a graphical representation of data where values are depicted by color. Heatmaps make it easy to visualize complex data and understand it at a glance.
Correlation — a positive correlation is a relationship between two variables in which both move in the same direction: as one variable increases the other increases, and as one decreases the other decreases.
Correlation can have a value:
1 is a perfect positive correlation
0 is no correlation (the values don’t seem linked at all)
-1 is a perfect negative correlation
#correlation plot
sns.set(rc = {'figure.figsize':(30,20)})
corr = df.corr().abs()
sns.heatmap(corr,annot=True)
plt.show()
plt.figure(figsize=(20,20))
corr=df[df.columns[1:]].corr()
mask = np.triu(np.ones_like(corr, dtype=bool))
sns.heatmap(df[df.columns[1:]].corr(), mask=mask, cmap='coolwarm', vmax=.3, center=0,
square=True, linewidths=.5,annot=True)
plt.show()
Notice the last column from right side of this map. We can see the correlation of all variables against Attrition. As you can see, some variables seem to be strongly correlated with the target variable. Here, a numeric correlation score will help us understand the graph better.
print (corr['Attrition'].sort_values(ascending=False)[:15], '\n') #top 15 values
print ('-------------------------------------')
print (corr['Attrition'].sort_values(ascending=False)[-5:]) #last 5 values
print ('-------------------------------------')
Attrition 1.000000
OverTime 0.246118
MaritalStatus 0.162070
DistanceFromHome 0.077924
JobRole 0.067151
Department 0.063991
NumCompaniesWorked 0.043494
Gender 0.029453
EducationField 0.026846
MonthlyRate 0.015170
PerformanceRating 0.002889
BusinessTravel 0.000074
HourlyRate -0.006846
EmployeeNumber -0.010577
PercentSalaryHike -0.013478
Name: Attrition, dtype: float64
-------------------------------------
JobLevel -0.169105
TotalWorkingYears -0.171063
EmployeeCount NaN
Over18 NaN
StandardHours NaN
Name: Attrition, dtype: float64
-------------------------------------
Here we see that the OverTime feature has the strongest correlation with the target variable, at about 0.25.
corr
Heatmap insights: -
As we know, it is recommended to avoid correlated features in your dataset. Indeed, a group of highly correlated features will not bring additional information (or just very few), but will increase the complexity of the algorithm, hence increasing the risk of errors.
Why Heatmap?
Heatmaps are used to show relationships between two variables, one plotted on each axis. By observing how cell colors change across each axis, you can observe if there are any patterns in value for one or both variables.
4. Boxplot
A boxplot is a standardized way of displaying the distribution of data based on a five number summary (“minimum”, first quartile [Q1], median, third quartile [Q3] and “maximum”).
Basically, to find the outlier in a dataset/column.
features=int64_lst_col
features.remove('Attrition')
sns.boxplot(data=df)
<Axes: >
The dark points are known as Outliers. Outliers are those data points that are significantly different from the rest of the dataset. They are often abnormal observations that skew the data distribution, and arise due to inconsistent data entry, or erroneous observations.
Boxplot Insights: -
Sometimes outliers may be an error in the data and should be removed. In this case these points are correct readings, yet they differ so much from the other points that they appear to be incorrect.
The best way to decide whether to remove them or not is to train models with and without these data points and compare their validation accuracy.
So we will keep it unchanged as it won’t affect our model.
Here, we can see that most of the variables possess outlier values. It would take us days if we start treating these outlier values one by one. Hence, for now we’ll leave them as is and let our algorithm deal with them. As we know, tree-based algorithms are usually robust to outliers.
Why Boxplot?
Box plots are used to show distributions of numeric data values, especially when you want to compare them between multiple groups. They are built to provide high-level information at a glance, offering general information about a group of data’s symmetry, skew, variance, and outliers.
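To put rough numbers on what the boxplot shows, the small sketch below counts, per column, how many values fall outside the usual 1.5 x IQR whiskers. It assumes the encoded df from earlier and is for inspection only; as stated above, the outliers are kept.
# Count, per numeric column, the values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
num_df = df.select_dtypes(include=['int64', 'int32'])
q1 = num_df.quantile(0.25)
q3 = num_df.quantile(0.75)
iqr = q3 - q1
outlier_mask = (num_df < (q1 - 1.5 * iqr)) | (num_df > (q3 + 1.5 * iqr))
print(outlier_mask.sum().sort_values(ascending=False).head(10))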
Attrition Prediction uses a statistical model trained on the leaving behaviour across millions of survey data points in the Peakon database. In determining the attrition risk per segment, the model also uses 5 key factors and follows this order:
The model calculates attrition risk per employee.
The model then uses employee-level attrition risk to calculate the average attrition risk for each segment, as well as for the whole company.
The model compares the average risk of each segment to the average risk of the company, to assign an attrition risk level. Example: Attrition risk in the Marketing segment is in the top 10% of your organization.
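That per-segment logic can be approximated with any classifier that exposes predicted probabilities. The helper below is a minimal sketch of the idea, not Peakon's actual model; it assumes a fitted classifier with predict_proba (one is trained later in this notebook) and a column of segment labels such as df['Department'].
def segment_attrition_risk(clf, X, segments):
    # Average predicted attrition probability per segment, plus the company-wide average.
    risk = pd.Series(clf.predict_proba(X)[:, 1], index=X.index)  # per-employee leave probability
    return risk.groupby(segments).mean(), risk.mean()

# Example (only once a classifier has been fitted further below in the notebook):
# per_dept, overall = segment_attrition_risk(clas, X, df['Department'])
# print(per_dept.sort_values(ascending=False), overall)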
In the next step we will divide our cleaned data into training data and testing data.
Step 6: Data Preparation
Goal:-
Tasks we are going to perform in this step:
Separate the target variable and the feature columns into two different dataframes and check the shape of the dataset for validation purposes.
Split dataset into train and test dataset.
Scaling on train dataset.
1. Now we separate the target variable and the feature columns into two different dataframes and check the shape of the dataset for validation purposes.
# Separate target and feature column in X and y variable
target = 'Attrition'
# X will be the features
X = df.drop(target,axis=1)
#y will be the target variable
y = df[target]
y holds the target variable and X holds all the other variables.
Here in employee attrition prediction, Attrition is the target variable.
X.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1470 entries, 0 to 1469
Data columns (total 34 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Age 1470 non-null int64
1 BusinessTravel 1470 non-null int32
2 DailyRate 1470 non-null int64
3 Department 1470 non-null int32
4 DistanceFromHome 1470 non-null int64
5 Education 1470 non-null int64
6 EducationField 1470 non-null int32
7 EmployeeCount 1470 non-null int64
8 EmployeeNumber 1470 non-null int64
9 EnvironmentSatisfaction 1470 non-null int64
10 Gender 1470 non-null int32
11 HourlyRate 1470 non-null int64
12 JobInvolvement 1470 non-null int64
13 JobLevel 1470 non-null int64
14 JobRole 1470 non-null int32
15 JobSatisfaction 1470 non-null int64
16 MaritalStatus 1470 non-null int32
17 MonthlyIncome 1470 non-null int64
18 MonthlyRate 1470 non-null int64
19 NumCompaniesWorked 1470 non-null int64
20 Over18 1470 non-null int32
21 OverTime 1470 non-null int32
22 PercentSalaryHike 1470 non-null int64
23 PerformanceRating 1470 non-null int64
24 RelationshipSatisfaction 1470 non-null int64
25 StandardHours 1470 non-null int64
26 StockOptionLevel 1470 non-null int64
27 TotalWorkingYears 1470 non-null int64
28 TrainingTimesLastYear 1470 non-null int64
29 WorkLifeBalance 1470 non-null int64
30 YearsAtCompany 1470 non-null int64
31 YearsInCurrentRole 1470 non-null int64
32 YearsSinceLastPromotion 1470 non-null int64
33 YearsWithCurrManager 1470 non-null int64
dtypes: int32(8), int64(26)
memory usage: 344.7 KB
y
0 1
1 0
2 1
3 0
4 0
..
1465 0
1466 0
1467 0
1468 0
1469 0
Name: Attrition, Length: 1470, dtype: int32
# Check the shape of X and y variable
X.shape, y.shape
((1470, 34), (1470,))
# Reshape the y variable
y = y.values.reshape(-1,1)
# Again check the shape of X and y variable
X.shape, y.shape
((1470, 34), (1470, 1))
2. Splitting the dataset into training and testing data.
Here we split our dataset in an 80/20 ratio, where 80% of the data goes into the training part and 20% goes into the testing part.
# Split the X and y into X_train, X_test, y_train, y_test variables with 80-20% split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Check shape of the splitted variables
X_train.shape, X_test.shape, y_train.shape, y_test.shape
((1176, 34), (294, 34), (1176, 1), (294, 1))
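The task list above also mentions scaling the training data. It is not applied in this notebook (the models below are fitted on the raw values), but a minimal sketch of how it could be done is shown here, fitting the scaler on the training split only to avoid leakage; X_train_scaled and X_test_scaled are hypothetical names.
from sklearn.preprocessing import StandardScaler

# Fit the scaler on the training data only, then apply the same transform to the test data.
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
print(X_train_scaled.shape, X_test_scaled.shape)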
Insights: -
Train test split technique is used to estimate the performance of machine learning algorithms which are used to make predictions on data not used to train the model.It is a fast and easy procedure to perform, the results of which allow you to compare the performance of machine learning algorithms for your predictive modeling problem. Although simple to use and interpret, there are times when the procedure should not be used, such as when you have a small dataset and situations where additional configuration is required, such as when it is used for classification and the dataset is not balanced.
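Because only about 16% of the employees left (237 of 1470), this dataset is exactly such an imbalanced case. Below is a small sketch of a stratified split, which keeps the class ratio the same in both parts; it is not used for the results that follow.
# Stratify on the target so the roughly 84/16 class ratio is preserved in both splits.
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y.ravel())
print(np.bincount(y_train_s.ravel()), np.bincount(y_test_s.ravel()))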
In the next step we will train our model on the basis of our training and testing data.
Step 7: Model Training
Goal:
In this step we are going to train our dataset on different classification algorithms. As we know, our target variable is in discrete format, so we have to apply a classification algorithm. In our dataset the outcome (dependent) variable, i.e. y, has only two possible values, 1 (the employee leaves) or 0 (the employee stays), so we will use classification algorithms.
Algorithms we are going to use in this step
Logistic Regression
KNearest Neighbor
Random Forest Classification
K-fold cross validation is a procedure used to estimate the skill of the model on new data. There are common tactics that you can use to select the value of k for your dataset. There are commonly used variations on cross-validation, such as stratified and repeated, that are available in scikit-learn
# Define kfold with 10 split
cv = KFold(n_splits=10, shuffle=True, random_state=42)
The goal of cross-validation is to test the model’s ability to predict new data that was not used in estimating it, in order to flag problems like overfitting or selection bias and to give an insight on how the model will generalize to an independent dataset (i.e., an unknown dataset, for instance from a real problem).
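The cells below score each fold with root-mean-squared error; for a classification target an accuracy (or F1) scorer is usually easier to interpret. The following is a minimal sketch reusing the cv object defined above, and is not part of the results reported below.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# 10-fold cross-validated accuracy on the training split.
acc_scores = cross_val_score(LogisticRegression(max_iter=1000),
                             X_train, y_train.ravel(),
                             cv=cv, scoring='accuracy')
print(acc_scores.mean(), acc_scores.std())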
1. Logistic Regression
Logistic regression is one of the most popular Machine Learning algorithms, which comes under the Supervised Learning technique. It is used for predicting the categorical dependent variable using a given set of independent variables.
Logistic regression predicts the output of a categorical dependent variable. Therefore the outcome must be a categorical or discrete value. It can be either Yes or No, 0 or 1, true or False, etc. but instead of giving the exact value as 0 and 1, it gives the probabilistic values which lie between 0 and 1.
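The probability mentioned above comes from the logistic (sigmoid) function, which squashes any real-valued score into the 0-1 range. A small self-contained sketch of just that function:
import numpy as np

def sigmoid(z):
    # Map any real-valued score z to a probability between 0 and 1.
    return 1.0 / (1.0 + np.exp(-z))

# Scores far below 0 map close to 0 ("stays"), scores far above 0 map close to 1 ("leaves").
print(sigmoid(np.array([-3.0, 0.0, 3.0])))   # roughly [0.047, 0.5, 0.953]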
Train set cross-validation
#Using Logistic Regression Algorithm to the Training Set
from sklearn.linear_model import LogisticRegression
log_R = LogisticRegression() #Object Creation
log_R.fit(X_train, y_train)
LogisticRegression()
#Accuracy check of training data
#Get accuracy score
log_R.score(X_train, y_train)
0.8324829931972789
#Accuracy of test data
log_R.score(X_test, y_test)
0.8673469387755102
# Getting kfold values
lg_scores = -1 * cross_val_score(log_R,
X_train,
y_train,
cv=cv,
scoring='neg_root_mean_squared_error')
lg_scores
array([0.40126917, 0.46940279, 0.30532006, 0.42186029, 0.36822985,
0.44149208, 0.43362909, 0.41344912, 0.40298035, 0.42365927])
# Mean of the train kfold scores
lg_score_train = np.mean(lg_scores)
lg_score_train
0.4081292066635018
Prediction
Now we will perform prediction on the dataset using Logistic Regression.
# Predict the values on the X_test dataset
y_predicted = log_R.predict(X_test)
Various parameters are calculated for analysing the predictions.
1) Confusion Matrix 2) Classification Report 3) Accuracy Score 4) Precision Score 5) Recall Score 6) F1 Score
Confusion Matrix
A confusion matrix presents a table layout of the different outcomes of the prediction and results of a classification problem and helps visualize its outcomes. It plots a table of all the predicted and actual values of a classifier.
This diagram helps in understanding the concept of confusion matrix.
# Constructing the confusion matrix.
from sklearn.metrics import confusion_matrix
#confusion matrix btw y_test and y_predicted
cm = confusion_matrix(y_test,y_predicted)
#We are creating Confusion Matrix on heatmap to have better understanding
# sns.heatmap(cm,cmap = 'Red') ~ to check for available colors
sns.set(rc = {'figure.figsize':(5,5)})
sns.heatmap(cm,cmap = 'icefire_r', annot = True, cbar=False, linecolor='Black', linewidth = 2)
plt.title("Confusion matrix")
plt.xticks(np.arange(2)+.5,['No Attrition', 'Attrition'])
plt.yticks(np.arange(2)+.5,['No Attrition', 'Attrition'])
plt.xlabel('Predicted Class')
plt.ylabel('True Class')
Text(29.75, 0.5, 'True Class')
sns.heatmap(cm/np.sum(cm), annot=True,
fmt='.2%', cmap='Blues', cbar = False)
<Axes: >
Evaluating all kinds of evaluation parameters.
Classification Report :
A classification report is a performance evaluation metric in machine learning. It is used to show the precision, recall, F1 Score, and support of your trained classification model.
F1_score :
The F1 score is the harmonic mean of precision and recall, and is used to evaluate classification models.
Precision_score :
The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative. The best value is 1 and the worst value is 0.
Recall_score :
Recall score is used to measure the model performance in terms of measuring the count of true positives in a correct manner out of all the actual positive values. Precision-Recall score is a useful measure of success of prediction when the classes are very imbalanced.
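All of these scores can be derived directly from the four confusion-matrix counts, which makes the definitions concrete. Below is a small self-contained sketch with illustrative counts (not the notebook's actual results).
# tn, fp, fn, tp are the four cells of a binary confusion matrix (illustrative numbers only).
tn, fp, fn, tp = 250, 5, 35, 4

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)                    # of predicted leavers, how many really left
recall = tp / (tp + fn)                       # of actual leavers, how many were caught
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)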
# Evaluating the classifier
# printing every score of the classifier
# scoring in anything
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score, accuracy_score, precision_score,recall_score
from sklearn.metrics import confusion_matrix
print("The model used is Logistic Regression")
l_acc = accuracy_score(y_test, y_predicted)*100
print("\nThe accuracy is: {}".format(l_acc))
prec = precision_score(y_test, y_predicted)
print("The precision is: {}".format(prec))
rec = recall_score(y_test, y_predicted)
print("The recall is: {}".format(rec))
f1 = f1_score(y_test, y_predicted)
print("The F1-Score is: {}".format(f1))
c1 = classification_report(y_test, y_predicted)
print("Classification Report is:")
print()
print(c1)
The model used is Logistic Regression
The accuracy is: 86.73469387755102
The precision is: 0.0
The recall is: 0.0
The F1-Score is: 0.0
Classification Report is:
precision recall f1-score support
0 0.87 1.00 0.93 255
1 0.00 0.00 0.00 39
accuracy 0.87 294
macro avg 0.43 0.50 0.46 294
weighted avg 0.75 0.87 0.81 294
2. K Nearest Neighbour
K-Nearest Neighbour is one of the simplest Machine Learning algorithms, based on the Supervised Learning technique. The K-NN algorithm assumes similarity between the new case/data and the available cases, and puts the new case into the category that is most similar to the available categories. K-NN stores all the available data and classifies a new data point based on similarity. This means that when new data appears, it can easily be classified into a well-suited category using the K-NN algorithm.
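K-NN is distance based, so features on very different scales (for example MonthlyIncome versus JobLevel) can dominate the distance. Below is a minimal sketch of pairing it with a scaler in a pipeline; this is a hypothetical variant, while the cells that follow fit K-NN on the raw values exactly as in the original notebook.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

# The scaler lives inside the pipeline, so it is re-fit on the training data only.
knn_pipe = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn_pipe.fit(X_train, y_train.ravel())
print(knn_pipe.score(X_test, y_test.ravel()))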
#Using KNeighborsClassifier Method of neighbors class to use Nearest Neighbor algorithm
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier()
classifier.fit(X_train, y_train)
KNeighborsClassifier()
#Get kfold values
Nn_scores = -1 * cross_val_score(classifier,
X_train,
y_train,
cv=cv,
scoring='neg_root_mean_squared_error')
Nn_scores
array([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan])
# Mean of the train kfold scores
Nn_score_train = np.mean(Nn_scores)
Nn_score_train
nan
Prediction
Now we will perform prediction on the dataset using K Nearest Neighbour.
# Predict the values on the X_test dataset
y_predicted = classifier.predict(X_test.values)
# Constructing the confusion matrix.
from sklearn.metrics import confusion_matrix
#Confusion matrix btw y_test and y_predicted
cm = confusion_matrix(y_test,y_predicted)
#We are drawing cm on heatmap to have better understanding
# sns.heatmap(cm,cmap = 'Red') ~ to check for available colors
sns.heatmap(cm,cmap = 'icefire_r', annot = True, fmt= 'd', cbar=False, linecolor='Black', linewidth = 2)
plt.title("Confusion matrix")
plt.xlabel('Predicted Class')
plt.ylabel('True Class')
Text(29.75, 0.5, 'True Class')
sns.heatmap(cm/np.sum(cm), annot=True,
fmt='.2%', cmap='Blues', cbar = False)
<Axes: >
Evaluating all kinds of evaluation parameters.
# Evaluating the classifier
# printing every score of the classifier
# scoring in anything
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score, accuracy_score, precision_score,recall_score
from sklearn.metrics import confusion_matrix
print("The model used is KNeighbors Classifier")
k_acc = accuracy_score(y_test, y_predicted)*100
print("\nThe accuracy is: {}".format(k_acc))
prec = precision_score(y_test, y_predicted)
print("The precision is: {}".format(prec))
rec = recall_score(y_test, y_predicted)
print("The recall is: {}".format(rec))
f1 = f1_score(y_test, y_predicted)
print("The F1-Score is: {}".format(f1))
c1 = classification_report(y_test, y_predicted)
print("Classification Report is:")
print()
print(c1)
The model used is KNeighbors Classifier
The accuracy is: 85.37414965986395
The precision is: 0.35714285714285715
The recall is: 0.1282051282051282
The F1-Score is: 0.18867924528301885
Classification Report is:
precision recall f1-score support
0 0.88 0.96 0.92 255
1 0.36 0.13 0.19 39
accuracy 0.85 294
macro avg 0.62 0.55 0.55 294
weighted avg 0.81 0.85 0.82 294
3. Random Forest Classifier
Random Forest is a powerful and versatile supervised machine learning algorithm that grows and combines multiple decision trees to create a “forest.” It can be used for both classification and regression problems in R and Python.
Random Forest and Decision Tree Algorithm are considered best for the data that has outliers.
#Using RandomForestClassifier method of ensemble class to use Random Forest Classification algorithm
from sklearn.ensemble import RandomForestClassifier
#clas = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)
clas = RandomForestClassifier()
clas.fit(X_train, y_train)
RandomForestClassifier()
#Accuracy check of training data
#Get accuracy score
clas.score(X_train, y_train)
1.0
#Accuracy of test data
clas.score(X_test, y_test)
0.8775510204081632
# Get kfold values
Dta_scores = -1 * cross_val_score(clas,
X_train,
y_train,
cv=cv,
scoring='neg_root_mean_squared_error')
Dta_scores
array([0.35653702, 0.45098762, 0.30532006, 0.39056673, 0.36822985,
0.39056673, 0.43362909, 0.38118125, 0.41344912, 0.35805744])
# Mean of the train kfold scores
Dta_score_train = np.mean(Dta_scores)
Dta_score_train
0.3848524898801785
Prediction
Now we will perform prediction on the dataset using Random Forest Classifier.
# Predict the values on the X_test dataset
y_predicted = clas.predict(X_test)
# Constructing the confusion matrix.
from sklearn.metrics import confusion_matrix
#confusion matrix btw y_test and y_predicted
cm = confusion_matrix(y_test,y_predicted)
#We are drawing cm on heatmap to have better understanding
# sns.heatmap(cm,cmap = 'Red') ~ to check for available colors
sns.heatmap(cm,cmap = 'icefire_r', annot = True, fmt= 'd', cbar=False, linecolor='Black', linewidth = 2)
plt.title("Confusion matrix")
plt.xticks(np.arange(2)+.5,['No Attrition', 'Attrition'])
plt.yticks(np.arange(2)+.5,['No Attrition', 'Attrition'])
plt.xlabel('Predicted Class')
plt.ylabel('True Class')
Text(29.75, 0.5, 'True Class')
sns.heatmap(cm/np.sum(cm), annot=True,
fmt='.2%', cmap='Blues', cbar = False)
<Axes: >
Evaluating all kinds of evaluation parameters.
# Evaluating the classifier
# printing every score of the classifier
# scoring in anything
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score, accuracy_score, precision_score,recall_score
from sklearn.metrics import confusion_matrix
print("The model used is Random Forest Classifier")
r_acc = accuracy_score(y_test, y_predicted)*100
print("\nThe accuracy is {}".format(r_acc))
prec = precision_score(y_test, y_predicted)
print("The precision is {}".format(prec))
rec = recall_score(y_test, y_predicted)
print("The recall is {}".format(rec))
f1 = f1_score(y_test, y_predicted)
print("The F1-Score is {}".format(f1))
c1 = classification_report(y_test, y_predicted)
print("Classification Report is:")
print()
print(c1)
The model used is Random Forest Classifier
The accuracy is 87.75510204081633
The precision is 0.8
The recall is 0.10256410256410256
The F1-Score is 0.18181818181818182
Classification Report is:
precision recall f1-score support
0 0.88 1.00 0.93 255
1 0.80 0.10 0.18 39
accuracy 0.88 294
macro avg 0.84 0.55 0.56 294
weighted avg 0.87 0.88 0.83 294
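Since the random forest performs best, its impurity-based feature importances give a rough idea of which attributes drive the predictions. Below is a minimal sketch, assuming the fitted clas and the feature frame X from earlier (the ranking will vary slightly from run to run).
# Rank the features by the fitted forest's impurity-based importance.
importances = pd.Series(clas.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))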
Insight: -
cal_metric=pd.DataFrame([l_acc,k_acc,r_acc],columns=["Score in percentage"])
cal_metric.index=['Logistic Regression',
'K-nearest Neighbours',
'Random Forest']
cal_metric
As you can see, our Random Forest model gives the highest score (about 0.8776, i.e. 87.76%).
So we are going to save our model built with the Random Forest algorithm.
Step 8: Save Model
Goal:- In this step we are going to save our model in a pickle format file.
import pickle
pickle.dump(log_R , open('Employee_Attrition_lo.pkl', 'wb'))
pickle.dump(classifier , open('Employee_Attrition_kn.pkl', 'wb'))
pickle.dump(clas , open('Employee_Attrition_ra.pkl', 'wb'))
import pickle
def model_prediction(features):
    pickled_model = pickle.load(open('Employee_Attrition_ra.pkl', 'rb'))
    attrition = str(pickled_model.predict(features)[0])

    if attrition == '1':
        attrition = 'Yes'
    else:
        attrition = 'No'

    return str(f'The Employee Attrition is {attrition}')
We can test our model by giving our own parameters or features to predict.
Age = 28
BusinessTravel = 2
DailyRate = 866
Department = 2
DistanceFromHome = 5
Education = 3
EducationField = 3
EmployeeCount = 1
EmployeeNumber = 1469
EnvironmentSatisfaction = 4
Gender = 1
HourlyRate = 84
JobInvolvement = 3
JobLevel = 2
JobRole = 7
JobSatisfaction = 1
MaritalStatus = 2
MonthlyIncome = 8463
MonthlyRate = 23490
NumCompaniesWorked = 0
Over18 = 0
OverTime = 0
PercentSalaryHike = 18
PerformanceRating = 3
RelationshipSatisfaction = 4
StandardHours = 80
StockOptionLevel = 0
TotalWorkingYears = 6
TrainingTimesLastYear = 4
WorkLifeBalance = 3
YearsAtCompany = 5
YearsInCurrentRole = 4
YearsSinceLastPromotion = 1
YearsWithCurrManager = 3
model_prediction([[Age,BusinessTravel,DailyRate,Department,DistanceFromHome,Education,EducationField,EmployeeCount,EmployeeNumber,EnvironmentSatisfaction,Gender,HourlyRate,JobInvolvement,JobLevel,JobRole,JobSatisfaction,MaritalStatus,MonthlyIncome,MonthlyRate,NumCompaniesWorked,Over18,OverTime,PercentSalaryHike,PerformanceRating,RelationshipSatisfaction,StandardHours,StockOptionLevel,TotalWorkingYears,TrainingTimesLastYear,WorkLifeBalance,YearsAtCompany,YearsInCurrentRole,YearsSinceLastPromotion,YearsWithCurrManager]])
'The Employee Attrition is No'
1 = Yes, 0 = No
Conclusion
After studying the problem statement we have built an efficient model for employee attrition. The above model helps in predicting whether an employee will leave. The accuracy of the prediction is about 87.76%.
Check out the whole project code here (GitHub repo).
🚀 Unlock Your Dream Job with HiDevs Community!
🔍 Seeking the perfect job? HiDevs Community is your gateway to career success in the tech industry. Explore free expert courses, job-seeking support, and career transformation tips.
💼 We offer an upskill program in Gen AI, Data Science, Machine Learning, and assist startups in adopting Gen AI at minimal development costs.
🆓 Best of all, everything we offer is completely free! We are dedicated to helping society.
Book free of cost 1:1 mentorship on any topic of your choice —topmate
✨ We dedicate over 30 minutes to each applicant’s resume, LinkedIn profile, mock interview, and upskill program. If you’d like our guidance, check out our services here
💡 Join us now, and turbocharge your career!
Deepak Chawla LinkedIn
Vijendra Singh LinkedIn
Yajendra Prajapati LinkedIn
YouTube Channel
Instagram Page
HiDevs LinkedIn
Project Youtube Link