
Elena Chen

@codingboo

Learning Data Science and Data Analytics!

#Day15 of #DataAnalytics #Seaborn
Place data in matrix form with .pivot_table().

.heatmap() plots data as a color-encoded matrix.
annot=True annotates each cell of the grid with its value.
cmap changes the color palette.

vs. .clustermap(), which reorders rows/columns so similar data are grouped together.

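A minimal sketch of the pivot_table → heatmap vs. clustermap workflow (the data, column names, and palette are made up for illustration):

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Toy long-form data: sales per region per month (made-up numbers)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "month": np.tile(["Jan", "Feb", "Mar"], 4),
    "region": np.repeat(["N", "S", "E", "W"], 3),
    "sales": rng.integers(10, 100, 12),
})

# .pivot_table() reshapes the data into matrix form:
# one row per region, one column per month
matrix = df.pivot_table(index="region", columns="month", values="sales")

# heatmap: color-encoded matrix; annot=True writes each value on the grid,
# cmap picks the color variation
sns.heatmap(matrix, annot=True, cmap="viridis")
plt.show()

# clustermap: same matrix, but rows/columns are reordered by hierarchical
# clustering, so similar rows/columns end up grouped next to each other
sns.clustermap(matrix, annot=True, cmap="viridis")
plt.show()
```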

#Day14 of #DataAnalytics #Seaborn kdeplot - kernel density estimation. The idea is to replace each data point (represented by a dash mark in a rugplot) with a small Gaussian (Normal) distribution centered on that value, then sum the Gaussians for a smooth estimate of the distribution.

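A hand-rolled sketch of that idea, to make it concrete: one Gaussian bump per data point, averaged. (The data and bandwidth are made up; sns.kdeplot does this for you, with automatic bandwidth selection.)

```python
import numpy as np

# Manual KDE: put a small Gaussian on each observation, then average.
def kde(x_grid, data, bandwidth=0.5):
    diffs = (x_grid[:, None] - data[None, :]) / bandwidth  # (grid, n)
    bumps = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2 * np.pi))
    return bumps.mean(axis=1)  # average of n Gaussians = density estimate

data = np.array([1.0, 1.2, 2.5, 3.0, 3.1])  # the "dash marks" in a rugplot
grid = np.linspace(-1, 5, 601)
density = kde(grid, data)

# The averaged bumps form a proper density: the area under the curve is ~1
dx = grid[1] - grid[0]
print((density * dx).sum())  # ≈ 1.0
```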

#Day11 of #DataAnalytics I'm struggling with #Matplotlib because my kernel keeps restarting/dying whenever I try to import matplotlib... the same problem I ran into the last time I was learning this...


#Day10 of #DataAnalytics Started #Matplotlib, a visualization tool for Python!
View matplotlib.org/2.0.2/gallery.… for the whole list of figures that can be made, with source code (e.g. statistical plots & scientific figures).

import matplotlib.pyplot as plt
%matplotlib inline

plt.plot()

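plt.plot() with no arguments draws an empty figure; a minimal first plot might look like this (the data and labels are made up):

```python
import matplotlib.pyplot as plt

# Made-up data: x vs. x squared
x = [0, 1, 2, 3, 4]
y = [v**2 for v in x]

plt.plot(x, y, "r--", label="x squared")  # red dashed line
plt.xlabel("x")
plt.ylabel("y")
plt.title("First Matplotlib figure")
plt.legend()
plt.show()  # (%matplotlib inline renders it automatically in a notebook)
```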

#Day9 of #DataAnalytics Finished the last section of learning #Pandas, and practiced extracting data with:
- str.contains(' ', case=False) to make the match case-insensitive
- .head(n) to get the first n rows, usually paired with .value_counts
- len(df['col2'].unique()) / df['col2'].nunique() to count distinct values

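A small sketch of those three extraction patterns (the DataFrame contents and column names are made up; note the method is str.contains, not str.contain):

```python
import pandas as pd

# Toy DataFrame for illustration
df = pd.DataFrame({
    "title": ["Data Science 101", "Cooking Basics", "Advanced DATA tools", "Gardening"],
    "col2":  ["A", "B", "A", "C"],
})

# Case-insensitive substring match on a text column
mask = df["title"].str.contains("data", case=False)
print(df[mask])  # rows whose title mentions "data" in any case

# First n rows, often paired with value_counts to see the top categories
print(df["col2"].value_counts().head(2))

# Two equivalent ways to count distinct values in a column
print(len(df["col2"].unique()))
print(df["col2"].nunique())
```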
