Merge branch 'master' into patch-5

HarryVasanth authored Oct 30, 2024
2 parents f0ca05e + 8249886 commit 867db45
Showing 9 changed files with 200 additions and 15 deletions.
8 changes: 5 additions & 3 deletions subjects/ai/emotions-detector/README.md
@@ -1,4 +1,4 @@
-## Emotion detector
+## Emotions detector

### Overview

@@ -67,7 +67,9 @@ Your goal is to implement a program that takes as input a video stream that cont
This dataset was provided for this past [Kaggle challenge](https://www.kaggle.com/competitions/challenges-in-representation-learning-facial-expression-recognition-challenge/overview).
It is possible to find more information about it on the challenge page. Train a CNN on the dataset `train.csv`. Here is an [example of architecture](https://www.quora.com/What-is-the-VGG-neural-network) you can implement.
**The CNN has to perform more than 60% on the test set**. You can use the `test_with_emotions.csv` file for this. You will see that the CNNs take a lot of time to train.
-You don't want to overfit the neural network. I strongly suggest to use early stopping, callbacks and to monitor the training using the `TensorBoard` 'note: Integrating TensorBoard is not optional'.
+You don't want to overfit the neural network. I strongly suggest using early stopping, callbacks, and monitoring the training with `TensorBoard`.
+
+> Note: Integrating TensorBoard is mandatory.
You have to save the trained model in `final_emotion_model.keras` and explain the chosen architecture in `final_emotion_model_arch.txt`. Use `model.summary()` to print the architecture.
It is also expected that you explain the iterations and how you ended up choosing your final architecture. Save a screenshot of the `TensorBoard` while the model is training in `tensorboard.png`, and save a plot with the learning curves showing the model training and stopping BEFORE the model starts overfitting in `learning_curves.png`.
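
To make the expectations concrete, here is a minimal, non-authoritative sketch of such a training setup. It assumes TensorFlow/Keras, a `logs/` directory for TensorBoard, and that `X_train`/`y_train` have already been parsed from `train.csv`; the architecture itself is only a placeholder, not the required one.

```python
# Sketch only: X_train is assumed to be (N, 48, 48, 1) floats in [0, 1]
# and y_train integer labels, both parsed from train.csv beforehand.
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),  # one unit per emotion class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

cbs = [
    # Stop before overfitting and keep the best weights seen so far.
    callbacks.EarlyStopping(monitor="val_loss", patience=5,
                            restore_best_weights=True),
    # Monitor the run with: tensorboard --logdir logs
    callbacks.TensorBoard(log_dir="logs"),
]
model.fit(X_train, y_train, validation_split=0.2,
          epochs=50, batch_size=64, callbacks=cbs)

model.save("final_emotion_model.keras")
with open("final_emotion_model_arch.txt", "w") as f:
    model.summary(print_fn=lambda line: f.write(line + "\n"))
```
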
@@ -160,7 +162,7 @@ Preprocessing ...

### Tips

-Balance technical prowess with psychological insight: as you fine-tune your CNN and optimize your video processing, remember that understanding the nuances of human facial expressions is key to creating a truly effective emotion detection system.
+Balance technical prowess with psychological insight: as you fine-tune your CNN and optimize your video processing, remember that understanding the nuances of human facial expressions is key to creating a truly effective emotion detection system. Good luck!

### Resources

182 changes: 182 additions & 0 deletions subjects/ai/emotions-detector/audit/README.md
@@ -1,3 +1,185 @@
#### Emotions detector

##### Preliminary

###### Is the structure of the project equivalent to the one described in the subject's `Delivery` section?

###### Does the README file summarize how to run the code and explain the global approach?

###### Does the environment contain all libraries used and their versions that are necessary to run the code?

###### Do the text files explain the chosen architectures?

#### CNN emotion classifier

###### Is the model trained only with the training set?

###### Is the accuracy on the test set higher than 60%?

###### Do the learning curves prove that the model is not overfitting?

###### Has the training been stopped early enough to avoid overfitting?

###### Does the screenshot show the usage of the `TensorBoard` to monitor the training?

###### Does the text document explain why the architecture was chosen, and what were the previous iterations?

###### Does the following command `python ./scripts/predict.py` run without any error and return an accuracy greater than 60%?

```prompt
python ./scripts/predict.py
Accuracy on test set: 62%
```
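
As a rough illustration of what `scripts/predict.py` might do, here is a hedged sketch. The column names (`emotion`, and `pixels` as space-separated grayscale values) follow the Kaggle CSV layout and are assumptions to verify against your own files:

```python
# Sketch only: evaluates the saved model on test_with_emotions.csv.
import numpy as np
import pandas as pd
import tensorflow as tf

df = pd.read_csv("test_with_emotions.csv")
X = np.stack([np.array(p.split(), dtype="float32") for p in df["pixels"]])
X = X.reshape(-1, 48, 48, 1) / 255.0
y = df["emotion"].to_numpy()

model = tf.keras.models.load_model("final_emotion_model.keras")
acc = (model.predict(X, verbose=0).argmax(axis=1) == y).mean()
print(f"Accuracy on test set: {acc:.0%}")
```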

#### Face detection on the video stream

###### Does the preprocessing pipeline take as input a webcam video stream of at least 20 seconds and save at least 20 preprocessed\* images in a separate folder?

###### Do all images contain a face?

###### Are all images reshaped and centered on the face?

###### Is the algorithm that detects the face imported via `cv2`?

###### Is the image converted to a 48 x 48 grayscale image?

###### If there's an issue related to the webcam, does the code take a pre-recorded video stream as input?
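
One possible shape for this preprocessing step, sketched under the assumption that OpenCV's bundled Haar cascade is used and that images land in a hypothetical `preprocessed/` folder:

```python
# Sketch only: detect faces on the webcam stream (or a recorded video),
# crop, convert to 48 x 48 grayscale, and save at least 20 images.
import os
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
os.makedirs("preprocessed", exist_ok=True)

cap = cv2.VideoCapture(0)  # or cv2.VideoCapture("recording.mp4") as fallback
saved = 0
while saved < 20:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # centered crop
        cv2.imwrite(f"preprocessed/face_{saved}.png", face)
        saved += 1
cap.release()
```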

###### Does the following command `python ./scripts/predict_live_stream.py` run without any error and return the following?

```prompt
python ./scripts/predict_live_stream.py
Reading video stream ...
Preprocessing ...
11:11:11s : Happy , 73%
Preprocessing ...
11:11:12s : Happy , 93%
Preprocessing ...
11:11:13s : Surprise , 71%
Preprocessing ...
11:11:14s : Neutral , 82%
...
Preprocessing ...
11:13:29s : Happy , 63%
```
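
A hedged sketch of how the live script could produce that output, assuming the preprocessing above and the saved model; the emotion label order mirrors the usual FER2013 convention and must be checked against your own training labels:

```python
# Sketch only: print one timestamped prediction per preprocessed face.
import time
import numpy as np
import tensorflow as tf

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]
model = tf.keras.models.load_model("final_emotion_model.keras")

def predict_face(face):  # face: a 48 x 48 grayscale uint8 array
    print("Preprocessing ...")
    x = face.astype("float32")[np.newaxis, ..., np.newaxis] / 255.0
    probs = model.predict(x, verbose=0)[0]
    stamp = time.strftime("%H:%M:%S")
    print(f"{stamp}s : {EMOTIONS[probs.argmax()]} , {probs.max():.0%}")
```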

#### Hack the CNN - guidelines:

The neural network trains by updating its weights given the training error. If an image is misclassified, the neural network changes its weights to classify it correctly. The trick is to keep the neural network's weights unchanged and to modify the input pixels instead, in order to force the neural network to predict the wanted class.
This part is validated if:

##### Choose an image from the database that gives more than 90% probability of `Happy`

###### Are the input pixels modified so that the neural network predicts `Sad`?

###### Can you easily recognize the chosen image? The modified image should be only SLIGHTLY changed, meaning the original image is still very easy to recognize.
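
A classic way to pass this part is a targeted fast-gradient-sign attack: repeatedly nudge the pixels down the loss gradient toward the target class while the weights stay frozen. A minimal sketch, where the `SAD` index is a hypothetical label position to verify against your own ordering:

```python
# Sketch only: `model` is the trained Keras classifier, `image` a
# (48, 48, 1) float array in [0, 1] classified as Happy with > 90%.
import numpy as np
import tensorflow as tf

SAD = 4  # hypothetical index of "Sad" in your label ordering
x = tf.convert_to_tensor(image[np.newaxis], dtype=tf.float32)
target = tf.constant([SAD])

for _ in range(10):  # several small steps keep the change barely visible
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            target, model(x))
    grad = tape.gradient(loss, x)
    x = tf.clip_by_value(x - 0.01 * tf.sign(grad), 0.0, 1.0)

print(model.predict(x.numpy()).argmax())  # ideally the SAD index now
```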

Here are three resources that detail similar approaches:

- https://github.com/XC-Li/Facial_Expression_Recognition/tree/master/Code/RAFDB
- https://github.com/karansjc1/emotion-detection/tree/master/with%20flask
- https://www.kaggle.com/drbeanesp21/aliaj-final-facial-expression-recognition (simplified)

2 changes: 1 addition & 1 deletion subjects/forum/README.md
@@ -84,7 +84,7 @@ For the forum project you must use Docker. You can read about docker basics in t
- [bcrypt](https://pkg.go.dev/golang.org/x/crypto/bcrypt)
- [UUID](https://github.com/gofrs/uuid)

-> You must not use use any frontend libraries or frameworks like React, Angular, Vue etc.
+> You must not use any frontend libraries or frameworks like React, Angular, Vue etc.
This project will help you learn about:

2 changes: 1 addition & 1 deletion subjects/get-them-all-dom/README.md
@@ -5,7 +5,7 @@
You've been assigned the task of finding the main architect of the Tower of Pisa before he completes his plans, sparing us all those lame pictures of people pretending to stop it from falling.

You arrive at the architects' chamber to find him, but all you have in front of you is a bunch of unknown people.
-Step by step, with the little information you have, gather information and figure out by elimination who he is.
+Step by step, with the limited details you have, gather information and figure out by elimination who he is.

Launch the provided HTML file in the browser to begin your investigation.<br/>
On top of the webpage, each of the four buttons fires a function:
2 changes: 1 addition & 1 deletion subjects/get-them-all/README.md
@@ -5,7 +5,7 @@
You've been assigned the task of finding the main architect of the Tower of Pisa before he completes his plans, sparing us all those lame pictures of people pretending to stop it from falling.

You arrive at the architects' chamber to find him, but all you have in front of you is a bunch of unknown people.
-Step by step, with the little information you have, gather information and figure out by elimination who he is.
+Step by step, with the limited details you have, gather information and figure out by elimination who he is.

Launch the provided HTML file in the browser to begin your investigation.<br/>
On top of the webpage, each of the four buttons fires a function:
2 changes: 1 addition & 1 deletion subjects/its-a-match/README.md
@@ -2,7 +2,7 @@

### Instructions

-Have you been pondering over the etymology of `grep`?
+Have you ever pondered the etymology of `grep`?

Create 4 regular expression variables:

17 changes: 9 additions & 8 deletions subjects/sortable/README.md
@@ -12,19 +12,20 @@ We've found _confidential_ information about those superheroes.
You must write all of the code from scratch. You are not allowed to rely on any frameworks or libraries like React, Vue, Svelte etc.

#### Fetching the data

In order to get the information, you should use `fetch`.
-When you use `fetch` in JS, it always returns a `Promise`. We will look more deeply into that later on. For now, tak a look at this:
+When you use `fetch` in JS, it always returns a `Promise`. We will look more deeply into that later on. For now, take a look at this:

```js
// This function is called only after the data has been fetched, and parsed.
-const loadData = heroes => {
-  console.log(heroes)
-}
+const loadData = (heroes) => {
+  console.log(heroes);
+};

-// Request the file with fetch, the data will downloaded to your browser cache.
-fetch('https://rawcdn.githack.com/akabab/superhero-api/0.2.0/api/all.json')
+// Request the file with fetch, and the data will be downloaded to your browser cache.
+fetch("https://rawcdn.githack.com/akabab/superhero-api/0.2.0/api/all.json")
   .then((response) => response.json()) // parse the response from JSON
-  .then(loadData) // .then will call the `loadData` function with the JSON value.
+  .then(loadData); // .then will call the `loadData` function with the JSON value.
```

#### Display
@@ -73,7 +74,7 @@ Additional features will be critical to your success. Here's a few which will gi
- `include`
- `exclude`
- `fuzzy`
- `equal`, `not equal`, `greater than` and `lesser than` for numbers (including weight and height).
- Detail view. Clicking a hero from the list will show all the details and a large image.
- A slick design. Spend some time improving the look and feel by playing around with CSS. Have fun with it.
- Modify the URL with the search term, so that if you copy and paste the URL in a different tab, it will display the same column filters. If you have implemented detail view, the state of which hero is displayed should also form part of the URL.
