GeniSysAI LLMCore is the successor to the GeniSysAI NLU Engine, the retrieval-based natural language understanding engine at the core of the GeniSysAI network. LLMCore takes advantage of modern AI technologies, replacing traditional natural language understanding with powerful generative AI models and up-to-date libraries.
Project Status: In Development

The first version of LLMCore is built for Intel® AI PCs and currently supports the following hardware: Intel® Core™ Ultra CPUs and NPUs, and Intel® Arc™ GPUs. You can find out more about Intel® AI PCs in our article here.
OpenVINO™ is an open-source toolkit by Intel, designed to optimize and deploy deep learning models across a range of tasks including computer vision, automatic speech recognition, generative AI, and natural language processing. It supports models built with frameworks like PyTorch, TensorFlow, ONNX, and Keras.
OpenVINO™ GenAI is designed to simplify the process of running generative AI models, giving you access to top generative AI models with optimized pipelines, efficient execution methods, and sample implementations. It abstracts away the complexity of the generation pipeline, letting you focus on providing the model and input context while OpenVINO handles tokenization, executes the generation loop on your device, and returns the results.
The project currently lets you set up the basics of LLMCore for Intel AI PC. The current functionality and documentation allow you to use Llama 3.2, modify the system prompt, and communicate with the LLM locally.
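The workflow above can be sketched in Python. Note that `build_llama3_prompt` and `chat_once` are illustrative helpers, not part of LLMCore itself; the chat-template tokens follow the published Llama 3 prompt format, the `LLMPipeline` call follows the public OpenVINO GenAI API, and the model directory is a hypothetical path to a Llama 3.2 model exported for OpenVINO.

```python
def build_llama3_prompt(system_prompt: str, user_message: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 chat template format,
    placing a custom system prompt before the user's message."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


def chat_once(model_dir: str, system_prompt: str, user_message: str) -> str:
    """Run one local generation with OpenVINO GenAI.

    Requires `pip install openvino-genai` and a Llama 3.2 model exported
    for OpenVINO (e.g. with optimum-cli) in `model_dir` (hypothetical path).
    """
    import openvino_genai as ov_genai

    # On an Intel AI PC the device can be "CPU", "GPU" (Intel Arc), or "NPU".
    pipe = ov_genai.LLMPipeline(model_dir, "CPU")
    prompt = build_llama3_prompt(system_prompt, user_message)
    return pipe.generate(prompt, max_new_tokens=128)
```

Changing the system prompt is then just a matter of passing a different string to `chat_once`, for example `chat_once("llama-3.2-3b-ov", "You are a helpful local assistant.", "Hello!")`.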
To get started with LLMCore for Intel AI PC, follow the installation guide.