# GECCO Workshop on Black-Box Optimization Benchmarking (BBOB 2023) {#bbob2023page}
Welcome to the web page of the 12th GECCO Workshop on Black-Box
Optimization Benchmarking (BBOB 2023), which took place during GECCO 2023.
> **WORKSHOP ON BLACK-BOX OPTIMIZATION BENCHMARKING (BBOB 2023)**
>
> | held as part of the
> |
> | **2023 Genetic and Evolutionary Computation Conference (GECCO-2023)**
> | July 15--19, Lisbon, Portugal
> | <https://gecco-2023.sigevo.org>

| Submission opening: February 13, 2023
| Submission deadline: April 17, 2023 AoE (not extendable, was: April 14)
| Notification: May 3, 2023
| Camera-ready: May 10, 2023
| Presenter mandatory registration: May 10, 2023
------------------------------------------------------- ------------------------------------------------------------------------ -----------------------------------------------------------------
[register for news](http://numbbo.github.io/register) [COCO quick start (scroll down a bit)](https://github.com/numbbo/coco) [latest COCO release](https://github.com/numbbo/coco/releases/)
------------------------------------------------------- ------------------------------------------------------------------------ -----------------------------------------------------------------
<br /><br />
Benchmarking optimization algorithms is a crucial part of their design and
of their application in practice. Since 2009, the Blackbox Optimization
Benchmarking Workshop at GECCO has been a place to discuss recent advances
in benchmarking practices and concrete results from actual benchmarking
experiments with a large variety of (blackbox) optimizers.
The Comparing Continuous Optimizers platform (COCO[^1],
<https://github.com/numbbo/coco>) has been developed in this context to
support algorithm developers and practitioners alike by automating
benchmarking experiments for blackbox optimization algorithms on single-
and bi-objective, unconstrained continuous problems in exact and noisy,
as well as expensive and non-expensive scenarios. A new bbob-constrained
test suite was released in 2022.
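To give a concrete picture of how COCO automates such an experiment, here is a minimal Python sketch in the spirit of the quick-start example in the COCO repository; the suite and observer names are COCO identifiers, while the result-folder name and the use of scipy's `fmin` as a placeholder optimizer are illustrative choices, not part of COCO itself:

```python
# Minimal benchmarking sketch (assumptions: cocoex installed, scipy used
# only as a stand-in optimizer; "my-algorithm" is an illustrative name).
import cocoex            # COCO experimentation module
import scipy.optimize    # placeholder optimizer; replace with your own

suite = cocoex.Suite("bbob", "", "")                 # noiseless single-objective suite
observer = cocoex.Observer("bbob", "result_folder: my-algorithm")
for problem in suite:                                # loop over all problem instances
    problem.observe_with(observer)                   # log every function evaluation
    scipy.optimize.fmin(problem, problem.initial_solution, disp=False)
```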
For the BBOB 2023 edition of the workshop, we invite participants to
discuss all kinds of aspects of (blackbox) benchmarking, but we welcome in
particular contributions related to constrained optimization. As in
previous years, presenting benchmarking results on the test suites
supported by COCO is a focus, but submissions are not limited to these
suites:
- single-objective unconstrained problems (bbob)
- single-objective unconstrained problems with noise (bbob-noisy)
- bi-objective unconstrained problems (bbob-biobj)
- large-scale single-objective problems (bbob-largescale)
- mixed-integer single- and bi-objective problems (bbob-mixint and
  bbob-biobj-mixint), and
- constrained optimization (bbob-constrained)
We particularly encourage submissions about algorithms from outside the
evolutionary computation community and papers that analyze the large
amount of algorithm data already publicly available through COCO (see
<https://numbbo.github.io/data-archive/>). As in previous editions, we
will provide source code in various languages (C/C++, Matlab/Octave,
Java, and Python) to benchmark algorithms on the test suites mentioned
above. Postprocessing the data and comparing algorithm performance will
be equally automated with COCO, up to ready-made ACM-compliant LaTeX
templates for writing papers.
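As a rough illustration of the postprocessing step (not the only way to run it), the `cocopp` Python module can be pointed at the data folder written during the experiment and, optionally, at archived data sets of other algorithms. The folder name and the archived algorithm name below are illustrative, and the trailing `!` selecting the first matching archive entry follows our reading of the cocopp documentation:

```python
# Postprocessing sketch: generate the plots, tables, and html pages that
# feed into the provided LaTeX templates, and compare the new data with
# an archived algorithm ("BIPOP-CMA-ES" is an illustrative archive name).
import cocopp

# Replace "my-data-folder" with the output folder written by the observer.
cocopp.main("my-data-folder BIPOP-CMA-ES!")
```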
For more details, please see below.
## Accepted papers
- Dimo Brockhoff, Pascal Capetillo, Jonathan Hornewall, Raphael
Walker: Benchmarking the Borg algorithm on the Biobjective
bbob-biobj Testbed
- Óscar Espinoza, Katya Rodríguez-Vázquez, Carlos Ignacio
Hernández-Castellanos, Suemi Rodriguez-Romo: Comparison Of Three
Versions Of Whale Optimization Algorithm (WOA) On The Bbob Test
Suite
- Armand Gissler: Evaluation of the impact of various modifications to
  CMA-ES that facilitate its theoretical analysis
- Victoria Johnson, João Duro, Visakan Kadirkamanathan, Robin
Purshouse: A distributed multi-disciplinary design optimization
benchmark test suite with constraints and multiple conflicting
objectives
- Jakub Kudela: Benchmarking State-of-the-art DIRECT-type Methods on
the BBOB Noiseless Testbed
- Tristan Marty, Yann Semet, Anne Auger, Sébastien Héron, Nikolaus
Hansen: Benchmarking CMA-ES with Basic Integer Handling on a
Mixed-Integer Test Problem Suite
## Schedule
The BBOB-2023 workshop got assigned the two middle slots on Saturday,
July 15, 2024 at GECCO in which the talks were scheduled
according to the tables below. For both slots, the room was Porto (F13).
Speakers are highlighted with a star behind the name. Please click on the
provided links to download the slides if available.
<div style="margin-left: auto;
margin-right: auto;
width: 100%">
+-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **Session I** | |
+-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 10:40 - 11:15 | The BBOBies: Introduction to Blackbox Optimization Benchmarking ([slides](./presentations/2023-GECCO/101_Dimo_bbob-2023-intro.pdf)) |
+-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 11:15 - 11:40 | Tristan Marty\*, Yann Semet, Anne Auger, Sébastien Héron, Nikolaus Hansen: Benchmarking CMA-ES with Basic Integer Handling on a Mixed-Integer Test Problem Suite |
+-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 11:40 - 12:05 | Dimo Brockhoff, Pascal Capetillo, Jonathan Hornewall\*, Raphael Walker: Benchmarking the Borg algorithm on the Biobjective bbob-biobj Testbed ([slides](./presentations/2023-GECCO/103_Jonathan_Borg.pdf)) |
+-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 12:05 - 12:30 | Victoria Johnson\*, João Duro, Visakan Kadirkamanathan, Robin Purshouse: A Distributed Multi-Disciplinary Design Optimization Benchmark Test Suite with Constraints and Multiple Conflicting Objectives |
+-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
</div>
<div style="margin-left: auto;
margin-right: auto;
width: 100%">
+----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **Session II** | |
+----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 14:00 - 14:05 | The BBOBies: Introduction to Blackbox Optimization Benchmarking ([slides](./presentations/2023-GECCO/201_Dimo_bbob-2023-introveryshort.pdf)) |
+----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 14:05 - 14:30 | Óscar Espinoza, Katya Rodríguez-Vázquez, Carlos Hernández-Castellanos, Suemi Rodriguez-Romo: Comparison Of Three Versions Of Whale Optimization Algorithm (WOA) On The Bbob Test Suite |
+----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 14:30 - 14:55 | Armand Gissler\*: Evaluation of the impact of various modifications to CMA-ES that facilitate its theoretical analysis ([slides](./presentations/2023-GECCO/203_Armand_modificationsToCMA-ES.pdf)) |
+----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 14:55 - 15:20 | Jakub Kudela\*: Benchmarking State-of-the-art DIRECT-type Methods on the BBOB Noiseless Testbed ([slides](./presentations/2023-GECCO/204_Jakub_DIRECTtypeMethods.pdf)) |
+----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 15:20 - 15:30 | The BBOBies: The COCO data archive and This Year’s Results ([slides](./presentations/2023-GECCO/205_Dimo_bbob-2023-summary.pdf)) |
+----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 15:30 - 15:50 | The BBOBies: Wrap-up and Open Discussion |
+----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
</div>
## Updates and News
Get updated about the latest news regarding the workshop and releases
and bugfixes of the supporting NumBBO/COCO platform, by registering at
<http://numbbo.github.io/register>.
## Submissions
We encourage any submission that is concerned with black-box
optimization benchmarking of continuous optimizers, for example papers
that:
- describe and benchmark new or not-so-new algorithms on one of the
above testbeds,
- compare new or existing algorithms from the COCO/BBOB database[^2],
- analyze the data obtained in previous editions of BBOB[^2], or
- discuss, compare, and improve upon any benchmarking methodology for
  continuous optimizers, such as the design of experiments, performance
  measures, presentation methods, benchmarking frameworks, test
  functions, ...
Paper submissions are expected to be made through the official GECCO
submission system at <https://ssl.linklings.net/conferences/gecco/>
before the deadline. ACM-compliant LaTeX templates are available in the
GitHub repository under
[code-postprocessing/latex-templates/](https://github.com/numbbo/coco/tree/master/code-postprocessing/latex-templates).

To finalize your submission, we kindly ask you to also submit your data
files, if applicable, by clicking on "Submit a COCO data set" at
<https://github.com/numbbo/coco/issues/new/choose>. To make your data
available online, you might want to use <https://zenodo.org/>, which
accepts data sets of up to 50 GB, or any other provider of online data
storage.
## Supporting material
The basis of the workshop is the Comparing Continuous Optimizers platform
(<https://github.com/numbbo/coco>), written in ANSI C, with other
languages calling the C code. Languages currently available are C, Java,
MATLAB/Octave, and Python.

Most likely, you want to read the [COCO quick
start](https://github.com/numbbo/coco) (scroll down a bit). This page
also provides the code for the benchmark functions[^4], for running the
experiments in C, Java, MATLAB/Octave, and Python, and for
postprocessing the experiment data into plots, tables, html pages, and
publisher-compliant PDFs via the provided LaTeX templates. Please refer
to <http://numbbo.github.io/coco-doc/experimental-setup/> for more
details on the general experimental set-up for black-box optimization
benchmarking.
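For papers that re-use already archived data, the `cocopp.archives` module mentioned in the footnotes can also be queried directly from Python. The following sketch reflects our reading of that module's interface; the `find` and `get` method names and the substring "BIPOP" are assumptions to be checked against the current cocopp documentation:

```python
# Sketch of browsing archived benchmarking data from Python (method names
# are assumptions based on the cocopp documentation; verify before use).
import cocopp

names = cocopp.archives.bbob.find("BIPOP")       # search archived data set names by substring
print(names)
local_path = cocopp.archives.bbob.get(names[0])  # download (and cache) the first match
```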
The latest (hopefully) stable release of the COCO software can be
downloaded as a whole [here](https://github.com/numbbo/coco/releases/).
Please use at least version v2.6.3 for running your benchmarking
experiments in 2023.
Documentation of the functions used in the different test suites can be
found here:
- `bbob` suite at
<https://numbbo.github.io/gforge/downloads/download16.00/bbobdocfunctions.pdf>
- `bbob-noisy` suite at
<http://coco.lri.fr/downloads/download15.03/bbobdocnoisyfunctions.pdf>
- `bbob-biobj` suite at <https://numbbo.github.io/bbob-biobj/>
- `bbob-largescale` suite at <https://arxiv.org/pdf/1903.06396.pdf>
- `bbob-mixint` and `bbob-biobj-mixint` suites at
<https://hal.inria.fr/hal-02067932/document> and at
<https://numbbo.github.io/gforge/preliminary-bbob-mixint-documentation/bbob-mixint-doc.pdf>
- `bbob-constrained` suite at:
<http://numbbo.github.io/coco-doc/bbob-constrained/>
## Important Dates
- **2023-04-17** *paper and data submission deadline* (not extendable,
  was: April 14)
- **2023-05-03** decision notification
- **2023-05-10** camera-ready paper deadline
- **2023-05-10** author registration deadline
- **2023-07-15** or **2023-07-16** workshop
All dates are given in ISO 8601 format (yyyy-mm-dd).
## Organizers
- Anne Auger, Inria and CMAP, Ecole Polytechnique, Institut
Polytechnique de Paris, France
- Dimo Brockhoff, Inria and CMAP, Ecole Polytechnique, Institut
Polytechnique de Paris, France
- Paul Dufossé, Inria and Thales Defense Mission Systems, France
- Nikolaus Hansen, Inria and CMAP, Ecole Polytechnique, Institut
Polytechnique de Paris, France
- Olaf Mersmann, TU Köln, Germany
- Petr Pošík, Czech Technical University, Czech Republic
- Tea Tušar, Jozef Stefan Institute (JSI), Slovenia
[^1]: Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea
    Tušar, and Dimo Brockhoff. "COCO: A platform for comparing
    continuous optimizers in a black-box setting." Optimization Methods
    and Software (2020): 1-31.
[^2]: The data of previously compared algorithms can be found at
    <https://numbbo.github.io/data-archive> and are easily accessible by
    name in the `cocopp` post-processing module and its Python
    `cocopp.archives` submodule, or in (fixed) html form at
    <https://numbbo.github.io/ppdata-archive>.
[^4]: Note that the current release of the new COCO platform does not
    yet contain the original noisy BBOB testbed, so for the time being
    you must use the old code at
    <https://numbbo.github.io/coco/oldcode/bboball15.03.tar.gz> if you
    want to compare your algorithm on the noisy testbed.