
Commit 1277d2d

Example structure refactor (#2303)
Signed-off-by: chensuyue <[email protected]>
1 parent 81beafc

File tree: 1,744 files changed (+26,065 −26,065 lines)

Note: large commits have some content hidden by default; only a subset of the 1,744 changed files is shown below.

.azure-pipelines/model-test-3x.yml

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ pr:
 - neural_compressor/common
 - neural_compressor/torch
 - neural_compressor/transformers
-- examples/3.x_api/pytorch/nlp/huggingface_models/language-modeling/quantization/weight_only
+- examples/deprecated/pytorch/nlp/huggingface_models/language-modeling/quantization/weight_only
 - setup.py
 - requirements_pt.txt
 - .azure-pipelines/scripts/models

.azure-pipelines/model-test.yml

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@ pr:
 - .azure-pipelines/model-test.yml
 - .azure-pipelines/template/docker-template.yml
 - .azure-pipelines/scripts/models
-- examples/tensorflow/oob_models/quantization/ptq
+- examples/deprecated/tensorflow/oob_models/quantization/ptq
 - .azure-pipelines/model-test.yml
 - .azure-pipelines/scripts/fwk_version.sh
 - .azure-pipelines/scripts/install_nc.sh

.azure-pipelines/scripts/models/env_setup.sh

Lines changed: 2 additions & 2 deletions

@@ -51,13 +51,13 @@ SCRIPTS_PATH="/neural-compressor/.azure-pipelines/scripts/models"
 log_dir="/neural-compressor/.azure-pipelines/scripts/models"
 if [[ "${inc_new_api}" == "3x"* ]]; then
     pip install cmake==3.31.6
-    WORK_SOURCE_DIR="/neural-compressor/examples/3.x_api/${framework}"
+    WORK_SOURCE_DIR="/neural-compressor/examples/${framework}"
     git clone https://github.com/intel/intel-extension-for-transformers.git /itrex
     cd /itrex
     pip install -r requirements.txt
     pip install -v .
 else
-    WORK_SOURCE_DIR="/neural-compressor/examples/${framework}"
+    WORK_SOURCE_DIR="/neural-compressor/examples/deprecated/${framework}"
 fi

 $BOLD_YELLOW && echo "processing ${framework}-${fwk_ver}-${model}" && $RESET

.azure-pipelines/scripts/models/run_model_trigger_common.sh

Lines changed: 2 additions & 2 deletions

@@ -58,9 +58,9 @@ function check_results() {
 log_dir="/neural-compressor/.azure-pipelines/scripts/models"
 SCRIPTS_PATH="/neural-compressor/.azure-pipelines/scripts/models"
 if [[ "${inc_new_api}" == "3x"* ]]; then
-    WORK_SOURCE_DIR="/neural-compressor/examples/3.x_api/${framework}"
-else
     WORK_SOURCE_DIR="/neural-compressor/examples/${framework}"
+else
+    WORK_SOURCE_DIR="/neural-compressor/examples/deprecated/${framework}"
 fi
 $BOLD_YELLOW && echo "processing ${framework}-${fwk_ver}-${model}" && $RESET
README.md

Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@ In particular, the tool provides the key features, typical examples, and open co
 * Support a wide range of Intel hardware such as [Intel Gaudi Al Accelerators](https://www.intel.com/content/www/us/en/products/details/processors/ai-accelerators/gaudi-overview.html), [Intel Core Ultra Processors](https://www.intel.com/content/www/us/en/products/details/processors/core-ultra.html), [Intel Xeon Scalable Processors](https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html), [Intel Xeon CPU Max Series](https://www.intel.com/content/www/us/en/products/details/processors/xeon/max-series.html), [Intel Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/details/discrete-gpus/data-center-gpu/flex-series.html), and [Intel Data Center GPU Max Series](https://www.intel.com/content/www/us/en/products/details/discrete-gpus/data-center-gpu/max-series.html) with extensive testing;
 support AMD CPU, ARM CPU, and NVidia GPU through ONNX Runtime with limited testing; support NVidia GPU for some WOQ algorithms like AutoRound and HQQ.

-* Validate popular LLMs such as [LLama2](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [Falcon](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [GPT-J](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [Bloom](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [OPT](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), and more than 10,000 broad models such as [Stable Diffusion](/examples/pytorch/nlp/huggingface_models/text-to-image/quantization), [BERT-Large](/examples/pytorch/nlp/huggingface_models/text-classification/quantization/ptq_static/fx), and [ResNet50](/examples/pytorch/image_recognition/torchvision_models/quantization/ptq/cpu/fx) from popular model hubs such as [Hugging Face](https://huggingface.co/), [Torch Vision](https://pytorch.org/vision/stable/index.html), and [ONNX Model Zoo](https://github.com/onnx/models#models), with automatic [accuracy-driven](/docs/source/design.md#workflow) quantization strategies
+* Validate popular LLMs such as [LLama2](/examples/deprecated/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [Falcon](/examples/deprecated/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [GPT-J](/examples/deprecated/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [Bloom](/examples/deprecated/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [OPT](/examples/deprecated/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), and more than 10,000 broad models such as [Stable Diffusion](/examples/deprecated/pytorch/nlp/huggingface_models/text-to-image/quantization), [BERT-Large](/examples/deprecated/pytorch/nlp/huggingface_models/text-classification/quantization/ptq_static/fx), and [ResNet50](/examples/deprecated/pytorch/image_recognition/torchvision_models/quantization/ptq/cpu/fx) from popular model hubs such as [Hugging Face](https://huggingface.co/), [Torch Vision](https://pytorch.org/vision/stable/index.html), and [ONNX Model Zoo](https://github.com/onnx/models#models), with automatic [accuracy-driven](/docs/source/design.md#workflow) quantization strategies

 * Collaborate with cloud marketplaces such as [Google Cloud Platform](https://console.cloud.google.com/marketplace/product/bitnami-launchpad/inc-tensorflow-intel?project=verdant-sensor-286207), [Amazon Web Services](https://aws.amazon.com/marketplace/pp/prodview-yjyh2xmggbmga#pdp-support), and [Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bitnami.inc-tensorflow-intel), software platforms such as [Alibaba Cloud](https://www.intel.com/content/www/us/en/developer/articles/technical/quantize-ai-by-oneapi-analytics-on-alibaba-cloud.html), [Tencent TACO](https://new.qq.com/rain/a/20221202A00B9S00) and [Microsoft Olive](https://github.com/microsoft/Olive), and open AI ecosystem such as [Hugging Face](https://huggingface.co/blog/intel), [PyTorch](https://pytorch.org/tutorials/recipes/intel_neural_compressor_for_pytorch.html), [ONNX](https://github.com/onnx/models#models), [ONNX Runtime](https://github.com/microsoft/onnxruntime), and [Lightning AI](https://github.com/Lightning-AI/lightning/blob/master/docs/source-pytorch/advanced/post_training_quantization.rst)
docs/source/3x/PT_FP8Quant.md

Lines changed: 1 addition & 1 deletion

@@ -91,7 +91,7 @@ During runtime, Intel Neural Compressor will detect hardware automatically and t

 ## Get Start with FP8 Quantization
 [Demo Usage](https://github.com/intel/neural-compressor?tab=readme-ov-file#getting-started)
-[Computer vision example](../../../examples/3.x_api/pytorch/cv/fp8_quant)
+[Computer vision example](../../../examples/pytorch/cv/fp8_quant)

 ## Optimum-habana LLM example
 ### Overview

docs/source/3x/PT_MXQuant.md

Lines changed: 1 addition & 1 deletion

@@ -95,7 +95,7 @@ user_model = convert(model=user_model)

 ## Examples

-- PyTorch [huggingface models](/examples/3.x_api/pytorch/nlp/huggingface_models/language-modeling/quantization/mx_quant)
+- PyTorch [huggingface models](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/mx_quant)


 ## Reference
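For context, the `convert(model=user_model)` call in this hunk's header is the last step of the MX quantization flow described earlier in PT_MXQuant.md. A minimal sketch of that flow, with a toy model and default config assumed for illustration (not part of this commit):

```python
# Hypothetical minimal MX quantization flow (3.x PyTorch API); the toy model and
# default config are assumptions for illustration only.
import torch
from neural_compressor.torch.quantization import MXQuantConfig, prepare, convert

user_model = torch.nn.Sequential(torch.nn.Linear(16, 16))  # placeholder FP32 model
quant_config = MXQuantConfig()                              # default MX settings assumed
user_model = prepare(model=user_model, quant_config=quant_config)
user_model = convert(model=user_model)                      # same call as in the hunk header
```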

docs/source/benchmark.md

Lines changed: 1 addition & 1 deletion

@@ -57,4 +57,4 @@ fit(model="./int8.pb", conf=conf, b_dataloader=eval_dataloader)

 ## Examples

-Refer to the [Benchmark example](../../examples/helloworld/tf_example5).
+Refer to the [Benchmark example](../../examples/deprecated/helloworld/tf_example5).
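For context, the `fit(...)` call in this hunk's header is the benchmark entry point documented earlier in benchmark.md. A minimal sketch of how it is typically wired up; the config values, model path, and dataloader below are placeholders assumed for illustration:

```python
# Hypothetical benchmark wiring; config values, model path, and dataloader are placeholders.
from neural_compressor.benchmark import fit
from neural_compressor.config import BenchmarkConfig

eval_dataloader = ...  # placeholder: any iterable of batches the quantized model accepts

conf = BenchmarkConfig(warmup=10, iteration=100)  # field names as in the 2.x benchmark docs
fit(model="./int8.pb", conf=conf, b_dataloader=eval_dataloader)
```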

docs/source/distillation.md

Lines changed: 3 additions & 3 deletions

@@ -107,7 +107,7 @@ model = training_func_for_nc(model)
 eval_func(model)
 ```

-For Intermediate Layer Knowledge Distillation or Self Distillation, the only difference to above launcher code is that `distil_loss_conf` should be set accordingly as shown below. More detailed settings can be found in this [example](../../examples/pytorch/nlp/huggingface_models/text-classification/optimization_pipeline/distillation_for_quantization/fx/run_glue_no_trainer.py#L510) for Intermediate Layer Knowledge Distillation and this [example](../../examples/pytorch/image_recognition/torchvision_models/self_distillation/eager/main.py#L344) for Self Distillation.
+For Intermediate Layer Knowledge Distillation or Self Distillation, the only difference to above launcher code is that `distil_loss_conf` should be set accordingly as shown below. More detailed settings can be found in this [example](../../examples/deprecated/pytorch/nlp/huggingface_models/text-classification/optimization_pipeline/distillation_for_quantization/fx/run_glue_no_trainer.py#L510) for Intermediate Layer Knowledge Distillation and this [example](../../examples/deprecated/pytorch/image_recognition/torchvision_models/self_distillation/eager/main.py#L344) for Self Distillation.

 ```python
 from neural_compressor.config import (

@@ -122,8 +122,8 @@ distil_loss_conf = IntermediateLayersKnowledgeDistillationLossConfig(layer_mappi
 distil_loss_conf = SelfKnowledgeDistillationLossConfig(layer_mappings=layer_mappings)
 ```
 ## Examples
-[Distillation PyTorch Examples](../../examples/README.md#distillation-1)
+[Distillation PyTorch Examples](../../examples/deprecated/README.md#distillation-1)
 <br>
-[Distillation TensorFlow Examples](../../examples/README.md#distillation)
+[Distillation TensorFlow Examples](../../examples/deprecated/README.md#distillation)
 <br>
 [Distillation Examples Results](./validated_model_list.md#validated-knowledge-distillation-examples)
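For context, the `distil_loss_conf` object configured in this hunk feeds the launcher code the surrounding text refers to, roughly as sketched below; the student, teacher, and `layer_mappings` values are placeholders, and the exact mapping format is model-specific (see the linked examples):

```python
# Hypothetical sketch of feeding distil_loss_conf into the distillation launcher flow.
# Student, teacher, and layer_mappings are placeholders; consult the linked examples
# for the real layer_mappings format.
import torch
from neural_compressor.config import DistillationConfig, SelfKnowledgeDistillationLossConfig
from neural_compressor.training import prepare_compression

model = torch.nn.Linear(8, 8)           # placeholder student
teacher_model = torch.nn.Linear(8, 8)   # placeholder teacher
layer_mappings = [[["student.layer", "teacher.layer"]]]  # placeholder mapping

distil_loss_conf = SelfKnowledgeDistillationLossConfig(layer_mappings=layer_mappings)
conf = DistillationConfig(teacher_model=teacher_model, criterion=distil_loss_conf)
compression_manager = prepare_compression(model, conf)
model = compression_manager.model  # wrap the student before entering the training loop
```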

docs/source/mixed_precision.md

Lines changed: 5 additions & 5 deletions

@@ -160,8 +160,8 @@ converted_model.save("./path/to/save/")

 ## Examples

-- Quick started with [helloworld example](/examples/helloworld/tf_example3)
-- PyTorch [ResNet18](/examples/pytorch/image_recognition/torchvision_models/mixed_precision/resnet18)
-- IPEX [DistilBERT base](/examples/pytorch/nlp/huggingface_models/question-answering/mixed_precision/ipex)
-- Tensorflow [ResNet50](/examples/tensorflow/image_recognition/tensorflow_models/resnet50_v1/mixed_precision)
-- ONNX Runtime [Bert base](/examples/onnxrt/nlp/huggingface_model/text_classification/mix_precision)
+- Quick started with [helloworld example](/examples/deprecated/helloworld/tf_example3)
+- PyTorch [ResNet18](/examples/deprecated/pytorch/image_recognition/torchvision_models/mixed_precision/resnet18)
+- IPEX [DistilBERT base](/examples/deprecated/pytorch/nlp/huggingface_models/question-answering/mixed_precision/ipex)
+- Tensorflow [ResNet50](/examples/deprecated/tensorflow/image_recognition/tensorflow_models/resnet50_v1/mixed_precision)
+- ONNX Runtime [Bert base](/examples/deprecated/onnxrt/nlp/huggingface_model/text_classification/mix_precision)
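For context, the `converted_model.save(...)` call in this hunk's header is the tail of the mixed-precision conversion flow documented earlier in mixed_precision.md. A minimal sketch of that flow, with the model and default config assumed for illustration:

```python
# Hypothetical BF16 mixed-precision conversion; the model and default config are assumptions.
import torchvision.models as models
from neural_compressor import mix_precision
from neural_compressor.config import MixedPrecisionConfig

model = models.resnet18(weights=None)      # placeholder FP32 model
config = MixedPrecisionConfig()            # defaults to BF16 conversion in the 2.x API
converted_model = mix_precision.fit(model, conf=config)
converted_model.save("./path/to/save/")    # same call as in the hunk header
```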
