
brittle parser in tgd_optimizer #246

Open

mrdrprofuroboros opened this issue Oct 29, 2024 · 1 comment

@mrdrprofuroboros

Describe the bug

TypeError                                 Traceback (most recent call last)
Cell In[7], line 1
----> 1 trainer.fit(
      2     train_dataset=filtered_dataset.train,
      3     val_dataset=filtered_dataset.val,
      4     test_dataset=filtered_dataset.test,
      5     resume_from_ckpt="/Users/imatas/.adalflow/ckpt/TransitionCorrectnessAdalComponent/constrained_max_steps_3_72e3f_run_1.json",
      6 )

File ~/opt/anaconda3/envs/aimon/lib/python3.11/site-packages/adalflow/optim/trainer/trainer.py:479, in Trainer.fit(self, adaltask, train_loader, train_dataset, val_dataset, test_dataset, debug, save_traces, raw_shots, bootstrap_shots, resume_from_ckpt)
    477     starting_step += self.max_steps
    478 elif self.strategy == "constrained":
--> 479     trainer_results = self._fit_text_grad_constraint(
    480         train_loader,
    481         val_dataset,
    482         test_dataset,
    483         trainer_results=trainer_results,
    484         starting_step=starting_step,
    485     )
    486     starting_step += self.max_steps
    487 else:

File ~/opt/anaconda3/envs/aimon/lib/python3.11/site-packages/adalflow/optim/trainer/trainer.py:1781, in Trainer._fit_text_grad_constraint(self, train_loader, val_dataset, test_dataset, trainer_results, starting_step)
   1775 all_losses.extend(losses)
   1776 all_y_preds.extend(
   1777     [y.full_response for y in y_preds if isinstance(y, Parameter)]
   1778 )
   1780 all_samples, all_losses, all_y_preds = (
-> 1781     self._text_grad_constraint_propose_step(
   1782         steps=steps,
   1783         all_samples=all_samples,
   1784         all_losses=all_losses,
   1785         all_y_preds=all_y_preds,
   1786     )
   1787 )
   1789 # check optimizer stages to see if the proposal was accepted so far
   1790 if not self._check_optimizer_proposal():

File ~/opt/anaconda3/envs/aimon/lib/python3.11/site-packages/adalflow/optim/trainer/trainer.py:1659, in Trainer._text_grad_constraint_propose_step(self, steps, all_samples, all_losses, all_y_preds, include_demo_optimizers)
   1654 tdqm_loader = tqdm(range(self.max_proposals_per_step), desc="Proposing")
   1655 for i in tdqm_loader:
   1656 
   1657     # print(f"Proposing step: {i}")
   1658     # self.optimizer.propose()
-> 1659     self._propose_text_optimizers()  # new prompts
   1660     if include_demo_optimizers:
   1661         self._demo_optimizers_propose()

File ~/opt/anaconda3/envs/aimon/lib/python3.11/site-packages/adalflow/optim/trainer/trainer.py:859, in Trainer._propose_text_optimizers(self)
    857 def _propose_text_optimizers(self):
    858     for text_optimizer in self.text_optimizers:
--> 859         text_optimizer.propose()

File ~/opt/anaconda3/envs/aimon/lib/python3.11/site-packages/adalflow/optim/text_grad/tgd_optimizer.py:334, in TGDOptimizer.propose(self)
    331 log.info(f\"Response from the optimizer: {response}\")
    332 # extract the improved variable from the response
    333 # TODO: make it more robust
--> 334 improved_variable = extract_new_variable(proposed_data)
    335 param.propose_data(improved_variable)
    336 if self.do_gradient_memory:

File ~/opt/anaconda3/envs/aimon/lib/python3.11/site-packages/adalflow/optim/text_grad/tgd_optimizer.py:129, in extract_new_variable(text)
    126 pattern = re.compile(r\"<VARIABLE>(.*?)</VARIABLE>\", re.DOTALL)
    128 # Find all matches
--> 129 matches = pattern.findall(text)
    131 if len(matches) == 0:
    132     return text.strip()

TypeError: expected string or bytes-like object, got 'NoneType'

To Reproduce
Run trainer.fit with OpenAI gpt-4o.

Expected behavior
Graceful handling: when the LLM response is None, the optimizer should surface a clear error (or retry) instead of crashing with a TypeError inside the regex parser.
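A minimal defensive sketch of the parser (not AdalFlow's actual code; the function name and the `<VARIABLE>` regex come from the traceback, while the None-handling and the choice of which match to return are assumptions):

```python
import re
from typing import Optional


def extract_new_variable(text: Optional[str]) -> str:
    """Extract the proposed variable from an optimizer response.

    Mirrors the <VARIABLE>(.*?)</VARIABLE> pattern seen in the traceback,
    but tolerates a None response (e.g. after a failed LLM call) instead
    of letting re.findall raise a TypeError.
    """
    if text is None:
        # A failed or empty LLM call produced no text; fail with a clear
        # message rather than a TypeError deep inside the regex engine.
        raise ValueError(
            "Optimizer returned no text to parse; the LLM call likely "
            "failed (e.g. a rate-limit error). Check the provider response."
        )
    pattern = re.compile(r"<VARIABLE>(.*?)</VARIABLE>", re.DOTALL)
    matches = pattern.findall(text)
    if not matches:
        # Fall back to the raw text, as the original implementation does.
        return text.strip()
    # Return the first match; the original's behavior past this point
    # is not shown in the traceback, so this is an assumption.
    return matches[0].strip()
```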

@mrdrprofuroboros (Author)

Ah, found the core issue:

Error code: 429 - {'error': {'message': 'Request too large for gpt-4o in organization ... on tokens per min (TPM): Limit 30000, Requested 37640. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}

Shall we propagate that to the user somehow? It might not be worth the effort, but reducing the optimization prompt size would be a good idea. :)
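One generic way to surface the 429 instead of the downstream TypeError is a retry-with-backoff wrapper around the optimizer's LLM call. A sketch with a hypothetical `call_with_backoff` helper and a stand-in `RateLimitError` (real provider SDKs define their own exception types; this is not an AdalFlow API):

```python
import time


class RateLimitError(Exception):
    """Stand-in for a provider 429 error; real SDKs raise their own type."""


def call_with_backoff(fn, max_retries=3, base_delay=1.0):
    """Retry fn() on rate-limit errors with exponential backoff.

    If retries are exhausted, re-raise so the user sees the underlying
    429 instead of a confusing TypeError later in the parser.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise  # propagate the real error to the user
            # Wait 1x, 2x, 4x, ... the base delay between attempts.
            time.sleep(base_delay * (2 ** attempt))
```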
