Evaluating Neural Networks

The most common way to evaluate a neural network is to plot the training loss, validation loss, training accuracy, and validation accuracy. These plots are essential for assessing both the robustness of your model and its level of fit (e.g. overfitting or underfitting). Yellowbrick offers more intuitive visual diagnostic tools that extend the scikit-learn API. Its main dependencies are scikit-learn and matplotlib.
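As a minimal sketch of those curves, assuming TensorFlow/Keras is available on the workstation (the toy data, network size, and epoch count below are purely illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

# Toy binary classification data (stand-in for a real dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# validation_split holds out 20% of the data so Keras records
# per-epoch validation metrics alongside the training metrics.
history = model.fit(X, y, validation_split=0.2, epochs=30, verbose=0)

fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(10, 4))

# Loss curves: a widening gap between the two lines suggests overfitting.
ax_loss.plot(history.history["loss"], label="training loss")
ax_loss.plot(history.history["val_loss"], label="validation loss")
ax_loss.set(xlabel="epoch", ylabel="loss")
ax_loss.legend()

# Accuracy curves tell the same story from the metric side.
ax_acc.plot(history.history["accuracy"], label="training accuracy")
ax_acc.plot(history.history["val_accuracy"], label="validation accuracy")
ax_acc.set(xlabel="epoch", ylabel="accuracy")
ax_acc.legend()

plt.tight_layout()
plt.show()
```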

What makes Yellowbrick different from matplotlib, seaborn, or plotly is that it is the only one of these libraries built for model visualization: it shows you the performance of a model directly and makes the code shorter and easier to write.
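A sketch of that workflow, using Yellowbrick's ConfusionMatrix and ClassificationReport visualizers around scikit-learn's MLPClassifier on the bundled digits dataset (the estimator and its hyperparameters are illustrative choices, not from the original page):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from yellowbrick.classifier import ClassificationReport, ConfusionMatrix

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Yellowbrick wraps any scikit-learn estimator: fit trains the model,
# score runs test predictions and draws the plot.
cm = ConfusionMatrix(MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                                   random_state=42))
cm.fit(X_train, y_train)
cm.score(X_test, y_test)
cm.show()

# Per-class precision/recall/F1 heatmap from the same estimator interface.
report = ClassificationReport(MLPClassifier(hidden_layer_sizes=(64,),
                                            max_iter=300, random_state=42))
report.fit(X_train, y_train)
report.score(X_test, y_test)
report.show()
```

Every Yellowbrick visualizer follows this same fit/score/show pattern, so swapping in a different estimator or visualizer typically only changes one line.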

  • Evaluating NNets using YellowBrick
  • Confusion matrix with matplotlib and yellowbrick