
Commit 31b0f3d

Merge pull request #507 from neurodata/staging
v0.0.6
2 parents 894d259 + 634d4d1 commit 31b0f3d

73 files changed (+53979, -4695 lines)


Diff for: .circleci/config.yml

+17-7
@@ -1,7 +1,7 @@
 version: 2.1

 orbs:
-  codecov: codecov/codecov@1.0.2
+  codecov: codecov/codecov@3.1.1

 jobs:
   build:
@@ -34,6 +34,12 @@ jobs:
     parameters:
       module:
         type: string
+      benchmarks:
+        type: string
+      experiments:
+        type: string
+      tutorials:
+        type: string
     docker:
       - image: cimg/python:3.8
     steps:
@@ -56,6 +62,9 @@ jobs:
           command: |
             . venv/bin/activate
             black --check --diff ./<< parameters.module >>
+            black --check --diff ./<< parameters.benchmarks >>
+            black --check --diff ./<< parameters.experiments >>
+            black --check --diff ./<< parameters.tutorials >>
       - run:
           name: run tests and coverage
           command: |
@@ -95,12 +104,12 @@ jobs:
           name: init .pypirc
           command: |
             echo -e "[pypi]" >> ~/.pypirc
-            echo -e "username = $PYPI_USERNAME" >> ~/.pypirc
-            echo -e "password = $PYPI_PASSWORD" >> ~/.pypirc
+            echo -e "username = __token__" >> ~/.pypirc
+            echo -e "password = $PIP_TOKEN" >> ~/.pypirc
       - run:
           name: create packages
           command: |
-            python setup.py sdist
+            python setup.py sdist bdist_wheel
       - run:
           name: upload to pypi
           command: |
@@ -121,13 +130,14 @@ workflows:
       - test-module:
           name: "proglearn"
           module: "proglearn"
+          benchmarks: "benchmarks/"
+          experiments: "docs/experiments/"
+          tutorials: "docs/tutorials/"
          requires:
            - "v3.8"
       - deploy:
-          requires:
-            - "proglearn"
          filters:
            tags:
-             only: /[0-9]+(\.[0-9]+)*/
+             only: /v[0-9]+(\.[0-9]+)*/
            branches:
              ignore: /.*/
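The workflow changes above switch PyPI publishing to token authentication (username __token__, password taken from PIP_TOKEN) and tighten the deploy trigger: the tag filter now requires a leading v, as in v0.0.6. The snippet below is a minimal sketch, not repository code; it assumes re.fullmatch is a fair stand-in for CircleCI's whole-string tag matching and compares which tag styles the old and new patterns accept.

# Minimal sketch (not part of the repo): compare the old and new CircleCI
# deploy tag filters. re.fullmatch approximates CircleCI's requirement that
# the pattern match the entire tag.
import re

NEW_TAG_FILTER = r"v[0-9]+(\.[0-9]+)*"   # pattern added in this commit
OLD_TAG_FILTER = r"[0-9]+(\.[0-9]+)*"    # pattern it replaces

for tag in ["v0.0.6", "0.0.6", "v1.2", "release-1"]:
    new_ok = re.fullmatch(NEW_TAG_FILTER, tag) is not None
    old_ok = re.fullmatch(OLD_TAG_FILTER, tag) is not None
    print(f"{tag}: new filter -> {new_ok}, old filter -> {old_ok}")

Under that assumption, a bare 0.0.6 tag would have triggered a deploy before this change, while only v-prefixed tags such as v0.0.6 trigger one now.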

Diff for: .github/ISSUE_TEMPLATE/bug_report.md

+1-1
@@ -1,7 +1,7 @@
 ---
 name: Bug Report
 about: Create a report to help us improve ProgLearn
-label: bug
+label:

 ---

Diff for: .github/ISSUE_TEMPLATE/documentation_fix.md

+1-1
@@ -1,7 +1,7 @@
 ---
 name: Documentation Fix
 about: Create a report to help us improve the documentation of ProgLearn
-label: documentation
+label:

 ---

Diff for: .github/ISSUE_TEMPLATE/feature_request.md

+13-5
@@ -1,17 +1,25 @@
 ---
 name: Feature Request
 about: Suggest an idea for ProgLearn
-label: enhancement
+label:

 ---

-**Is your feature request related to a problem? Please describe.**
+<!--
+Thank you for taking the time to file a bug report.
+Please fill in the fields below, deleting the sections that
+don't apply to your issue. You can view the final output
+by clicking the preview button above.
+Note: This is a comment, and won't appear in the output.
+-->

+#### Is your feature request related to a problem? Please describe.

-**Describe the solution you'd like**

+#### Describe the solution you'd like

-**Describe alternatives you've considered**

+#### Describe alternatives you've considered

-**Additional context (e.g. screenshots)**
+
+#### Additional context (e.g. screenshots)

Diff for: CITATION.cff

+7-3
@@ -63,12 +63,15 @@ authors:
     affiliation: "Johns Hopkins University, Baltimore, MD"
     family-names: Priebe
     given-names: Carey
-cff-version: "1.1.0"
+cff-version: "1.2.0"
 date-released: 2021-09-18
 identifiers:
   -
     type: url
     value: "https://arxiv.org/pdf/2004.12908.pdf"
+  -
+    type: doi
+    value: 10.5281/zenodo.4060264
 keywords:
   - Python
   - classification
@@ -77,8 +80,9 @@ keywords:
   - "transfer learning"
   - "domain adaptation"
 license: MIT
-message: "If you use this software, please cite it using these metadata."
+doi: 10.5281/zenodo.4060264
+message: "If you use ProgLearn, please cite it using these metadata."
 repository-code: "https://github.com/neurodata/ProgLearn"
 title: "Omnidirectional Transfer for Quasilinear Lifelong Learning"
-version: "0.0.5"
+version: "0.0.6"
 ...

Diff for: LICENSE

+1-1
@@ -1,6 +1,6 @@
 MIT License

-Copyright (c) 2020 Dr. Joshua T. Vogelstein
+Copyright (c) 2020 Neurodata

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

Diff for: README.md

+1-1
@@ -22,4 +22,4 @@
 Some system/package requirements:
 - **Python**: 3.6+
 - **OS**: All major platforms (Linux, macOS, Windows)
-- **Dependencies**: keras, scikit-learn, scipy, numpy, joblib
+- **Dependencies**: tensorflow, scikit-learn, scipy, numpy, joblib

Diff for: benchmarks/cifar_exp/appendix_tables.ipynb

+88-60
@@ -23,7 +23,8 @@
 "import pickle\n",
 "import matplotlib.pyplot as plt\n",
 "from matplotlib import rcParams\n",
-"rcParams.update({'figure.autolayout': True})\n",
+"\n",
+"rcParams.update({\"figure.autolayout\": True})\n",
 "import numpy as np\n",
 "from itertools import product\n",
 "import seaborn as sns\n",
@@ -39,77 +40,80 @@
 "\n",
 "#%%\n",
 "def unpickle(file):\n",
-"    with open(file, 'rb') as fo:\n",
-"        dict = pickle.load(fo, encoding='bytes')\n",
+"    with open(file, \"rb\") as fo:\n",
+"        dict = pickle.load(fo, encoding=\"bytes\")\n",
 "    return dict\n",
 "\n",
+"\n",
 "def get_fte_bte(err, single_err, ntrees):\n",
 "    bte = [[] for i in range(10)]\n",
 "    te = [[] for i in range(10)]\n",
 "    fte = []\n",
-"    \n",
+"\n",
 "    for i in range(10):\n",
-"        for j in range(i,10):\n",
-"            #print(err[j][i],j,i)\n",
-"            bte[i].append(err[i][i]/err[j][i])\n",
-"            te[i].append(single_err[i]/err[j][i])\n",
-"    \n",
+"        for j in range(i, 10):\n",
+"            # print(err[j][i],j,i)\n",
+"            bte[i].append(err[i][i] / err[j][i])\n",
+"            te[i].append(single_err[i] / err[j][i])\n",
+"\n",
 "    for i in range(10):\n",
-"        #print(single_err[i],err[i][i])\n",
-"        fte.append(single_err[i]/err[i][i])\n",
-"    \n",
-"    \n",
-"    return fte,bte,te\n",
+"        # print(single_err[i],err[i][i])\n",
+"        fte.append(single_err[i] / err[i][i])\n",
 "\n",
-"def calc_mean_bte(btes,task_num=10,reps=6):\n",
-"    mean_bte = [[] for i in range(task_num)]\n",
+"    return fte, bte, te\n",
 "\n",
 "\n",
+"def calc_mean_bte(btes, task_num=10, reps=6):\n",
+"    mean_bte = [[] for i in range(task_num)]\n",
+"\n",
 "    for j in range(task_num):\n",
 "        tmp = 0\n",
 "        for i in range(reps):\n",
 "            tmp += np.array(btes[i][j])\n",
-"        \n",
-"        tmp=tmp/reps\n",
+"\n",
+"        tmp = tmp / reps\n",
 "        mean_bte[j].extend(tmp)\n",
-"    \n",
-"    return mean_bte \n",
 "\n",
-"def calc_mean_te(tes,task_num=10,reps=6):\n",
+"    return mean_bte\n",
+"\n",
+"\n",
+"def calc_mean_te(tes, task_num=10, reps=6):\n",
 "    mean_te = [[] for i in range(task_num)]\n",
 "\n",
 "    for j in range(task_num):\n",
 "        tmp = 0\n",
 "        for i in range(reps):\n",
 "            tmp += np.array(tes[i][j])\n",
-"        \n",
-"        tmp=tmp/reps\n",
+"\n",
+"        tmp = tmp / reps\n",
 "        mean_te[j].extend(tmp)\n",
-"    \n",
-"    return mean_te \n",
 "\n",
-"def calc_mean_fte(ftes,task_num=10,reps=6):\n",
+"    return mean_te\n",
+"\n",
+"\n",
+"def calc_mean_fte(ftes, task_num=10, reps=6):\n",
 "    fte = np.asarray(ftes)\n",
-"    \n",
-"    return list(np.mean(np.asarray(fte_tmp),axis=0))\n",
 "\n",
-"def calc_mean_err(err,task_num=10,reps=6):\n",
-"    mean_err = [[] for i in range(task_num)]\n",
+"    return list(np.mean(np.asarray(fte_tmp), axis=0))\n",
 "\n",
 "\n",
+"def calc_mean_err(err, task_num=10, reps=6):\n",
+"    mean_err = [[] for i in range(task_num)]\n",
+"\n",
 "    for j in range(task_num):\n",
 "        tmp = 0\n",
 "        for i in range(reps):\n",
 "            tmp += np.array(err[i][j])\n",
-"        \n",
-"        tmp=tmp/reps\n",
-"        #print(tmp)\n",
+"\n",
+"        tmp = tmp / reps\n",
+"        # print(tmp)\n",
 "        mean_err[j].extend([tmp])\n",
-"    \n",
-"    return mean_err \n",
+"\n",
+"    return mean_err\n",
+"\n",
 "\n",
 "#%%\n",
-"reps = slots*shifts\n",
+"reps = slots * shifts\n",
 "\n",
 "btes = [[] for i in range(task_num)]\n",
 "ftes = [[] for i in range(task_num)]\n",
@@ -121,39 +125,49 @@
 "fte_tmp = [[] for _ in range(reps)]\n",
 "err_tmp = [[] for _ in range(reps)]\n",
 "\n",
-"count = 0 \n",
+"count = 0\n",
 "for slot in range(slots):\n",
 "    for shift in range(shifts):\n",
-"        filename = 'result/'+model+str(ntrees)+'_'+str(shift+1)+'_'+str(slot)+'.pickle'\n",
+"        filename = (\n",
+"            \"result/\"\n",
+"            + model\n",
+"            + str(ntrees)\n",
+"            + \"_\"\n",
+"            + str(shift + 1)\n",
+"            + \"_\"\n",
+"            + str(slot)\n",
+"            + \".pickle\"\n",
+"        )\n",
 "        multitask_df, single_task_df = unpickle(filename)\n",
 "\n",
 "        err = [[] for _ in range(10)]\n",
 "\n",
 "        for ii in range(10):\n",
 "            err[ii].extend(\n",
-"                1 - np.array(\n",
-"                    multitask_df[multitask_df['base_task']==ii+1]['accuracy']\n",
-"                )\n",
+"                1\n",
+"                - np.array(\n",
+"                    multitask_df[multitask_df[\"base_task\"] == ii + 1][\"accuracy\"]\n",
+"                )\n",
 "            )\n",
-"        single_err = 1 - np.array(single_task_df['accuracy'])\n",
-"        fte, bte, te = get_fte_bte(err,single_err,ntrees)\n",
-"        \n",
+"        single_err = 1 - np.array(single_task_df[\"accuracy\"])\n",
+"        fte, bte, te = get_fte_bte(err, single_err, ntrees)\n",
+"\n",
 "        err_ = [[] for i in range(task_num)]\n",
 "        for i in range(task_num):\n",
-"            for j in range(task_num-i):\n",
-"                #print(err[i+j][i])\n",
-"                err_[i].append(err[i+j][i])\n",
-"        \n",
+"            for j in range(task_num - i):\n",
+"                # print(err[i+j][i])\n",
+"                err_[i].append(err[i + j][i])\n",
+"\n",
 "        te_tmp[count].extend(te)\n",
 "        bte_tmp[count].extend(bte)\n",
 "        fte_tmp[count].extend(fte)\n",
 "        err_tmp[count].extend(err_)\n",
-"        count+=1\n",
-"        \n",
-"te = calc_mean_te(te_tmp,reps=reps)\n",
-"bte = calc_mean_bte(bte_tmp,reps=reps)\n",
-"fte = calc_mean_fte(fte_tmp,reps=reps)\n",
-"error = calc_mean_err(err_tmp,reps=reps)"
+"        count += 1\n",
+"\n",
+"te = calc_mean_te(te_tmp, reps=reps)\n",
+"bte = calc_mean_bte(bte_tmp, reps=reps)\n",
+"fte = calc_mean_fte(fte_tmp, reps=reps)\n",
+"error = calc_mean_err(err_tmp, reps=reps)"
 ]
 },
 {
@@ -162,9 +176,11 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"flat_te_per_rep = [[np.mean(te_tmp[rep][i]) for i in range(len(te))] for rep in range(reps)]\n",
-"mean_te_per_rep = np.mean(flat_te_per_rep, axis = 1)\n",
-"min_te_per_rep = np.min(flat_te_per_rep, axis = 1)"
+"flat_te_per_rep = [\n",
+"    [np.mean(te_tmp[rep][i]) for i in range(len(te))] for rep in range(reps)\n",
+"]\n",
+"mean_te_per_rep = np.mean(flat_te_per_rep, axis=1)\n",
+"min_te_per_rep = np.min(flat_te_per_rep, axis=1)"
 ]
 },
 {
@@ -211,8 +227,16 @@
 }
 ],
 "source": [
-"print(\"Mean FTE(Task 10): ({} +- {})\".format(np.mean(task_fte_task_10_per_rep), np.std(task_fte_task_10_per_rep)))\n",
-"print(\"Mean BTE(Task 1): ({} +- {})\".format(np.mean(task_bte_task_1_per_rep), np.std(task_bte_task_1_per_rep)))"
+"print(\n",
+"    \"Mean FTE(Task 10): ({} +- {})\".format(\n",
+"        np.mean(task_fte_task_10_per_rep), np.std(task_fte_task_10_per_rep)\n",
+"    )\n",
+")\n",
+"print(\n",
+"    \"Mean BTE(Task 1): ({} +- {})\".format(\n",
+"        np.mean(task_bte_task_1_per_rep), np.std(task_bte_task_1_per_rep)\n",
+"    )\n",
+")"
 ]
 },
 {
@@ -240,7 +264,11 @@
 "source": [
 "for task in range(task_num):\n",
 "    final_te_of_task_per_rep = [te_tmp[rep][task][-1] for rep in range(reps)]\n",
-"    print(\"Final TE of Task {}: ({} +- {})\".format(task, np.mean(final_te_of_task_per_rep), np.std(final_te_of_task_per_rep)))"
+"    print(\n",
+"        \"Final TE of Task {}: ({} +- {})\".format(\n",
+"            task, np.mean(final_te_of_task_per_rep), np.std(final_te_of_task_per_rep)\n",
+"        )\n",
+"    )"
 ]
 },
 {

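Most of the notebook changes above are black reformatting; the transfer-efficiency arithmetic in get_fte_bte is unchanged. Forward transfer efficiency for task i is single_err[i] / err[i][i], backward transfer efficiency is err[i][i] / err[j][i] for later tasks j, and overall transfer efficiency is single_err[i] / err[j][i]. The sketch below illustrates those ratios with made-up error values for a two-task run; the numbers are hypothetical, not benchmark results.

# Minimal sketch with hypothetical numbers, mirroring the ratios computed by
# get_fte_bte in the notebook above (not actual benchmark data).
# single_err[i] : error of a model trained only on task i
# err[(j, i)]   : error on task i after training through task j (j >= i)
single_err = [0.30, 0.28]
err = {(0, 0): 0.25, (1, 0): 0.22, (1, 1): 0.24}

fte_task0 = single_err[0] / err[(0, 0)]  # forward TE:  0.30 / 0.25 = 1.20
bte_task0 = err[(0, 0)] / err[(1, 0)]    # backward TE: 0.25 / 0.22 is about 1.14
te_task0 = single_err[0] / err[(1, 0)]   # overall TE:  0.30 / 0.22 is about 1.36

print(fte_task0, bte_task0, te_task0)  # values above 1 indicate positive transfer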