feat: Add the finals (#44)
* feat: The finals implemented

* feat: Fix pytest dep until they fix pytest-dev/pytest#11662

* feat: Add test Dockerfile
Flowtter authored Jan 28, 2024
1 parent 60d799f commit 5fcc780
Showing 19 changed files with 477 additions and 49 deletions.
41 changes: 40 additions & 1 deletion README.md
@@ -15,7 +15,7 @@ It uses a neural network to detect highlights in the video-game frames.\

# Supported games

Currently it supports **[Valorant](https://playvalorant.com/)**, **[Overwatch](https://playoverwatch.com/)** and **[CSGO2](https://www.counter-strike.net/cs2)**.
Currently it supports **[Valorant](https://playvalorant.com/)**, **[Overwatch](https://playoverwatch.com/)**, **[CSGO2](https://www.counter-strike.net/cs2)** and **[The Finals](https://www.reachthefinals.com/)**.

# Usage

@@ -122,6 +122,21 @@ Here are some settings that I found to work well for me:
}
```

#### The Finals

```json
{
"clip": {
"framerate": 8,
"second-before": 6,
"second-after": 0,
"second-between-kills": 3
},
"stretch": false,
"game": "thefinals"
}
```
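
Note: setting `"game": "thefinals"` switches Crispy from the neural network to the OCR-based killfeed detection described in the Q&A below.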

## Run

You can now run the application with the run.[sh|bat] file.
@@ -176,6 +191,30 @@ The following effects are available to use:

In the result view, you can preview your final montage.

# Q&A

### **Q:** Why are some games not using the neural network?

**A:** To detect highlights in a video game, the neural network looks for visual cues that appear in every highlight.\
In Overwatch, for example, a kill is marked by a red skull, so the neural network searches the frames for red skulls.\
Unfortunately, not every game has such a cue.\
The Finals, for instance, has no symbol that marks a kill.\
For those games, the neural network is not used; instead, OCR is used to read the killfeed.\
OCR is definitely less accurate than the neural network, slower, and more dependent on the video quality.\
But it's the best we can do for now.
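
Here is a minimal sketch of that idea, assuming a 1080p frame and a hypothetical `read_killfeed` helper; the real pipeline also masks the killfeed's colors first, as the code in `highlight.py` below shows:

```python
import easyocr
import numpy as np
from PIL import Image

# A sketch, not Crispy's actual code: crop the fixed killfeed
# region from a 1080p frame and run OCR on it.
reader = easyocr.Reader(["en"], gpu=False, verbose=False)


def read_killfeed(frame_path: str) -> list:
    # The Finals' killfeed sits in a 250x115 box at (1500, 75).
    crop = Image.open(frame_path).crop((1500, 75, 1750, 190))
    # readtext returns (bbox, text, confidence) tuples; keep confident reads only.
    return [text for _, text, conf in reader.readtext(np.asarray(crop)) if conf > 0.3]
```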

### **Q:** Why are some games not supported?

**A:** The neural network has simply not been trained for those games.\
If you want to add support for a game, you can train the neural network yourself and then open a pull request.\
A tutorial is available [here](https://github.com/Flowtter/crispy/tree/master/crispy-api/dataset).

### **Q:** In CSGO2, I moved the UI, and the kills are not detected anymore. What can I do?

**A:** Unfortunately, there is nothing you can do.\
The neural network is trained to detect kills in the default UI only.\
I'm planning to add support for custom UIs in the future, but it is definitely not a priority.

# Contributing

Every contribution is welcome.
14 changes: 14 additions & 0 deletions crispy-api/Dockerfile.test
@@ -0,0 +1,14 @@
FROM python:3.10-slim
WORKDIR /app
RUN apt-get -y update
RUN apt-get install ffmpeg -y

COPY requirements.txt .
COPY requirements-dev.txt .
RUN pip install -r requirements-dev.txt
COPY api api
COPY tests tests
COPY assets assets
COPY settings.json settings.json
COPY setup.cfg setup.cfg
CMD ["pytest"]
25 changes: 14 additions & 11 deletions crispy-api/api/__init__.py
@@ -12,24 +12,27 @@
from montydb import MontyClient, set_storage
from pydantic.json import ENCODERS_BY_TYPE

from api.config import ASSETS, DATABASE_PATH, DEBUG, FRAMERATE, GAME, MUSICS, VIDEOS
from api.config import (
ASSETS,
DATABASE_PATH,
DEBUG,
FRAMERATE,
GAME,
MUSICS,
USE_NETWORK,
VIDEOS,
)
from api.tools.AI.network import NeuralNetwork
from api.tools.enums import SupportedGames
from api.tools.filters import apply_filters # noqa
from api.tools.setup import handle_highlights, handle_musics

ENCODERS_BY_TYPE[ObjectId] = str

neural_network = NeuralNetwork(GAME)
NEURAL_NETWORK = None

if GAME == SupportedGames.OVERWATCH:
neural_network.load(os.path.join(ASSETS, "overwatch.npy"))
elif GAME == SupportedGames.VALORANT:
neural_network.load(os.path.join(ASSETS, "valorant.npy"))
elif GAME == SupportedGames.CSGO2:
neural_network.load(os.path.join(ASSETS, "csgo2.npy"))
else:
raise ValueError(f"game {GAME} not supported")
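# Weights ship per game (e.g. assets/overwatch.npy), so the file name follows the game's enum value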
if USE_NETWORK:
NEURAL_NETWORK = NeuralNetwork(GAME)
NEURAL_NETWORK.load(os.path.join(ASSETS, GAME + ".npy"))


logging.getLogger("PIL").setLevel(logging.ERROR)
23 changes: 16 additions & 7 deletions crispy-api/api/config.py
@@ -1,6 +1,7 @@
import json
import os

import easyocr
from starlette.config import Config

from api.tools.enums import SupportedGames
@@ -49,15 +50,23 @@
FRAMES_BEFORE = __clip.get("second-before", 0) * FRAMERATE
FRAMES_AFTER = __clip.get("second-after", 0) * FRAMERATE

__neural_network = __settings.get("neural-network")
if __neural_network is None:
raise KeyError("neural-network not found in settings.json")

CONFIDENCE = __neural_network.get("confidence", 0.6)

STRETCH = __settings.get("stretch", False)
GAME = __settings.get("game")
if GAME is None:
raise KeyError("game not found in settings.json")
if GAME.upper() not in [game.name for game in SupportedGames]:
raise ValueError(f"game {GAME} not supported")

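# The Finals has no fixed kill symbol for the network to detect, so it uses the OCR path instead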
USE_NETWORK = GAME not in [SupportedGames.THEFINALS]

__neural_network = __settings.get("neural-network")
if __neural_network is None and USE_NETWORK:
raise KeyError("neural-network not found in settings.json")

if __neural_network is not None:
CONFIDENCE = __neural_network.get("confidence", 0.6)
else:
CONFIDENCE = 0

STRETCH = __settings.get("stretch", False)

READER = easyocr.Reader(["en", "fr"], gpu=True, verbose=False)
98 changes: 89 additions & 9 deletions crispy-api/api/models/highlight.py
@@ -22,28 +22,34 @@
class Box:
def __init__(
self,
offset_x: int,
x: int,
y: int,
width: int,
height: int,
shift_x: int,
stretch: bool,
from_center: bool = True,
) -> None:
"""
:param offset_x: Offset in pixels from the center of the video to the left
:param x: Offset in pixels from the left of the video, or from the center if from_center is enabled
:param y: Offset in pixels from the top of the video
:param width: Width of the box in pixels
:param height: Height of the box in pixels
:param shift_x: Shift the box by a certain amount of pixels to the right
:param stretch: Stretch the box to fit the video
:param from_center: If enabled, x is measured from the center of the video, otherwise from the left
Example:
To create a box 50 px from the center on x, shifted 20 px to the right:
Box(50, 0, 100, 100, 20, False)
"""
half = 720 if stretch else 960
if from_center:
half = 720 if stretch else 960
self.x = half - x + shift_x
else:
self.x = x + shift_x

self.x = half - offset_x + shift_x
self.y = y
self.width = width
self.height = height
@@ -93,17 +99,21 @@ async def extract_images(
post_process: Callable,
coordinates: Box,
framerate: int = 4,
save_path: str = "images",
force_extract: bool = False,
) -> bool:
"""
Extracts images from a video at a given framerate
:param post_process: Function to apply to each image
:param coordinates: Coordinates of the box to extract
:param framerate: Framerate to extract the images
:param save_path: Path to save the images
"""
if self.images_path:
if self.images_path and not force_extract:
return False
images_path = os.path.join(self.directory, "images")
images_path = os.path.join(self.directory, save_path)

if not os.path.exists(images_path):
os.mkdir(images_path)
@@ -124,8 +134,9 @@ async def extract_images(

post_process(im).save(im_path)

self.update({"images_path": images_path})
self.save()
if save_path == "images":
self.update({"images_path": images_path})
self.save()

return True

@@ -220,6 +231,73 @@ def post_process(image: Image) -> Image:
post_process, Box(50, 925, 100, 100, 20, stretch), framerate=framerate
)

async def extract_the_finals_images(
self, framerate: int = 4, stretch: bool = False
) -> bool:
def is_color_close(
pixel: Tuple[int, int, int],
expected: Tuple[int, int, int],
threshold: int = 100,
) -> bool:
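# Euclidean distance in RGB space; pixels within `threshold` of the expected color count as close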
distance: float = (
sum((pixel[i] - expected[i]) ** 2 for i in range(len(pixel))) ** 0.5
)
return distance < threshold

def post_process_killfeed(image: Image) -> Image:
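# Keep only pixels close to the killfeed blue (12, 145, 201); black out everything else before grayscaling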
r, g, b = image.split()
for x in range(image.width):
for y in range(image.height):
if not is_color_close(
(r.getpixel((x, y)), g.getpixel((x, y)), b.getpixel((x, y))),
(12, 145, 201),
120,
):
r.putpixel((x, y), 0)
b.putpixel((x, y), 0)
g.putpixel((x, y), 0)

im = ImageOps.grayscale(Image.merge("RGB", (r, g, b)))

final = Image.new("RGB", (250, 115))
final.paste(im, (0, 0))
return final

killfeed_state = await self.extract_images(
post_process_killfeed,
Box(1500, 75, 250, 115, 0, stretch, from_center=False),
framerate=framerate,
)

def post_process(image: Image) -> Image:
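# Same masking, but keeping the near-white pixels of the on-screen username text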
r, g, b = image.split()
for x in range(image.width):
for y in range(image.height):
if not is_color_close(
(r.getpixel((x, y)), g.getpixel((x, y)), b.getpixel((x, y))),
(255, 255, 255),
):
r.putpixel((x, y), 0)
b.putpixel((x, y), 0)
g.putpixel((x, y), 0)

im = ImageOps.grayscale(Image.merge("RGB", (r, g, b)))

final = Image.new("RGB", (200, 120))
final.paste(im, (0, 0))
return final

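# Usernames are re-extracted on every call (force_extract=True) into their own folder; only the default "images" extraction updates images_path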
return (
await self.extract_images(
post_process,
Box(20, 800, 200, 120, 0, stretch, from_center=False),
framerate=framerate,
save_path="usernames",
force_extract=True,
)
and killfeed_state
)

async def extract_images_from_game(
self, game: SupportedGames, framerate: int = 4, stretch: bool = False
) -> bool:
@@ -229,8 +307,10 @@ async def extract_images_from_game(
return await self.extract_valorant_images(framerate, stretch)
elif game == SupportedGames.CSGO2:
return await self.extract_csgo2_images(framerate, stretch)
elif game == SupportedGames.THEFINALS:
return await self.extract_the_finals_images(framerate, stretch)
else:
raise NotImplementedError
raise NotImplementedError(f"game {game} not supported")

def recompile(self) -> bool:
from api.tools.utils import sanitize_dict
4 changes: 2 additions & 2 deletions crispy-api/api/routes/highlight.py
@@ -5,7 +5,7 @@
from fastapi.responses import FileResponse
from pydantic import BaseModel

from api import app, neural_network
from api import NEURAL_NETWORK, app
from api.config import CONFIDENCE, FRAMERATE, FRAMES_AFTER, FRAMES_BEFORE, OFFSET
from api.models.highlight import Highlight
from api.models.segment import Segment
@@ -84,7 +84,7 @@ async def post_highlights_segments_generate() -> None:
extract_segments,
kwargs={
"highlight": highlight,
"neural_network": neural_network,
"neural_network": NEURAL_NETWORK,
"confidence": CONFIDENCE,
"framerate": FRAMERATE,
"offset": OFFSET,
1 change: 1 addition & 0 deletions crispy-api/api/tools/enums.py
@@ -5,3 +5,4 @@ class SupportedGames(str, Enum):
VALORANT = "valorant"
OVERWATCH = "overwatch"
CSGO2 = "csgo2"
THEFINALS = "thefinals"
7 changes: 6 additions & 1 deletion crispy-api/api/tools/image.py
@@ -19,4 +19,9 @@ def compare_image(path1: str, path2: str) -> bool:
data1 = np.asarray(blur1)
data2 = np.asarray(blur2)

return bool((1 + np.corrcoef(data1.flat, data2.flat)[0, 1]) / 2 > 0.8)
# https://stackoverflow.com/questions/51248810/python-why-would-numpy-corrcoef-return-nan-values
corrcoef = np.corrcoef(data1.flat, data2.flat)
if np.isnan(corrcoef).all(): # pragma: no cover
return True

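# Map the Pearson coefficient from [-1, 1] to [0, 1]; above 0.8 the images are considered the same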
return bool((1 + corrcoef[0, 1]) / 2 > 0.8)
