Project Proposal Feedback #3

Open
pooyanjamshidi opened this issue Oct 7, 2024 · 0 comments
Feedback on Aphasia Simulation Proposal:

You’ve got an intriguing and unique project here! The idea of simulating aphasia in large language models (LLMs) by “damaging” their internal weights is not only creative but could also lead to novel insights in both AI and cognitive science. Here’s a breakdown of my thoughts and some suggestions for improvement:

Strengths:

  • Novel Research Idea: There’s something very exciting about the idea of manipulating LLMs to simulate aphasia-like behaviors. Since there’s no direct research in this area, your project is clearly novel, which gives it a lot of potential.
  • Methodology with Concrete Steps: I like how you’ve laid out the process, starting with selecting an LLM and verifying its output on image captioning tests. This method of establishing a healthy baseline before simulating damage to the model’s weights is a solid plan.
  • Practical Application: Simulating aphasia in a computational model could provide valuable insights into language processing both in AI and in human conditions. You’ve also mentioned that even negative results can contribute to the field, which is a great attitude for experimental research.

Suggestions:

  1. Clarifying the Connection to Brain Function:

    • You’ve acknowledged that there isn’t a direct physical model linking the brain and LLMs. While this is true, you might want to dive a bit deeper into why LLMs are still a useful analogy. Maybe focus more on how both the brain and LLMs process information in complex, hierarchical ways (e.g., layers in LLMs vs brain regions). Drawing a stronger conceptual parallel between the two could help strengthen the motivation for your project.
  2. Layer Targeting Strategy:

    • Your methodology mentions using random search or adding noise to specific layers to simulate damage, but I’d suggest refining this approach. Are certain layers of the LLM more likely to affect language generation when perturbed? A little research into which layers are more critical for specific language functions (e.g., syntax, grammar) could help you target the “damage” more effectively and simulate more realistic aphasia symptoms (see the layer-targeted noise sketch after this list).

    Also, check out Yann LeCun’s "Optimal Brain Damage" and related works in model pruning—these techniques focus on strategically removing parts of neural networks to maintain performance while reducing size. This could give you insights into which weights or layers are more critical, helping you apply more targeted damage to simulate aphasia more realistically. There are also a lot of connections between pruning and understanding neural networks, which might give you an interesting angle on your project.

  3. Training the Classifier:

    • Building a classifier for aphasia types (non-aphasia, non-fluent aphasia, fluent aphasia) is a solid step, but make sure you clearly define how this classifier will be trained and tested. How will you ensure the classifier can generalize well? Consider providing more details on how you’ll use data from AphasiaBank and C-STAR to fine-tune the classifier. It might be worth running a few small preliminary experiments to verify that these datasets map well to the LLM outputs you’re working with (a minimal fine-tuning sketch follows this list).
  4. Simulating Different Types of Aphasia:

    • It would be interesting to explore how specific modifications to the LLM might result in different types of aphasia-like behavior. For instance, what happens when you zero out different weights—could this simulate different kinds of speech deficits? You could extend your exploration by experimenting with different methods of altering weights to see if different aphasia types can be emulated more accurately.
  5. Evaluation and Iteration:

    • You mentioned adjusting the weights with noise or zeroing out weights, which is a good start. It might help to introduce some level of iterative experimentation. For example, after the initial random search, you could refine the damage using gradient-based methods or a more targeted approach (see the saliency-based lesioning sketch after this list). This will allow for more controlled experimentation rather than relying solely on randomness.
  6. Ethical Considerations:

    • Though your project is more focused on AI, there’s an interesting ethical component regarding the potential future use of this kind of research in medical fields. It might be worth discussing any potential applications in rehabilitation or therapy, or how this research could influence the development of tools for people with aphasia.
  7. Alternative LLMs:

    • You’ve mentioned several LLMs (Flamingo, MiniGPT-4, BLIP-2, LLaVA), and that’s great. However, since this is a research project, I would recommend choosing models that are well-documented and easier to work with. Maybe start with simpler, lightweight models for initial prototyping and then test the more advanced ones once you get promising results.
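
To make a few of these suggestions more concrete, here are some rough sketches. They all use GPT-2 as a lightweight stand-in (in the spirit of suggestion 7); the model choice, layer indices, and noise scales are my own placeholder assumptions, not part of your proposal. First, the layer-targeted noise injection from suggestion 2:

```python
# Sketch: inject Gaussian noise into the weights of a few chosen transformer
# blocks. GPT-2 is a stand-in for your captioning LLM; the target layers and
# noise scale are arbitrary placeholders to tune.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

target_layers = [4, 5, 6]   # hypothetical "language-critical" blocks to perturb
noise_scale = 0.05          # noise std relative to each weight tensor's own std

with torch.no_grad():
    for name, param in model.named_parameters():
        # only touch weight matrices inside the chosen blocks
        if "weight" in name and any(f".h.{i}." in name for i in target_layers):
            param.add_(torch.randn_like(param) * noise_scale * param.std())
```

Running your captioning/description prompts through the perturbed model and comparing against the healthy baseline would then tell you which layers matter most for fluency vs. content.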
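
For suggestion 3, a minimal fine-tuning sketch for the three-way classifier, assuming you can export AphasiaBank / C-STAR transcripts into (utterance, label) pairs. The toy utterances, the DistilBERT backbone, and the hyperparameters below are all placeholders:

```python
# Sketch: fine-tune a small text classifier over three labels
# (non-aphasia / non-fluent / fluent). The toy utterances are placeholders --
# in practice you'd build (utterance, label) pairs from AphasiaBank / C-STAR
# transcripts and hold out a proper test split.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

labels = ["non_aphasia", "non_fluent", "fluent"]
toy = Dataset.from_dict({
    "text": ["the boy is reaching for the cookie jar",        # non_aphasia
             "boy ... jar ... cookie ... fall",                # non_fluent (telegraphic)
             "he goes with the thing up to the place there"],  # fluent ("empty" speech)
    "label": [0, 1, 2],
})

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(labels))

toy = toy.map(lambda b: tok(b["text"], truncation=True, padding="max_length",
                            max_length=64), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="aphasia-clf", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=toy,
)
trainer.train()
```

On generalization: holding out entire speakers (not just utterances) when you split the data would give a more honest read on how well the classifier transfers to the LLM outputs.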
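
For suggestions 4 and 5, one way to move from purely random damage toward targeted damage is a first-order, Optimal-Brain-Damage-style saliency score: rank weights by |w · ∂L/∂w| on a small probe batch and zero out the top-k in a chosen layer. The probe text, the layer picked, and the fraction zeroed below are arbitrary choices for illustration:

```python
# Sketch: rank weights in one layer by a crude first-order saliency
# (|weight * gradient|) on a probe sentence, then zero out the top-k --
# a targeted "lesion" rather than a random one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tok = AutoTokenizer.from_pretrained("gpt2")

probe = tok("a simple picture description sentence", return_tensors="pt")
out = model(**probe, labels=probe["input_ids"])
out.loss.backward()  # gradients of the LM loss w.r.t. every weight

target = dict(model.named_parameters())["transformer.h.5.mlp.c_fc.weight"]
saliency = (target * target.grad).abs()   # crude OBD-style score
k = int(0.10 * saliency.numel())          # "damage" 10% of this layer
idx = torch.topk(saliency.flatten(), k).indices

with torch.no_grad():
    flat = target.view(-1)
    flat[idx] = 0.0  # zero out the most salient weights (the lesion)
```

Sweeping the layer, the fraction zeroed, and whether you hit the most or least salient weights gives you a controlled family of lesions to feed to the classifier, rather than relying solely on randomness.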

Additional Thoughts:

  • Visualizations of Damage: As part of your final deliverables, consider including visualizations that show how different levels of “damage” to the LLM impact language outputs (a tiny sweep-and-plot sketch is included below). This could help communicate your findings more effectively, especially if you’re simulating different types of aphasia.

  • Cross-Disciplinary Insights: If possible, consider collaborating with researchers in neuroscience or cognitive science to help ground your simulations in real-world aphasia research. Even informal discussions with experts in these fields could help you make your results more compelling.
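
On the visualizations point, even a crude sweep can make the story visible: plot the damage level against some degradation measure. Below the measure is just the LM loss on one fixed reference caption, a stand-in for whatever fluency or aphasia metrics you end up using; the reference sentence and noise scales are placeholders:

```python
# Sketch: sweep the noise scale and plot how badly each damaged copy of the
# model predicts a fixed reference caption (LM loss as a crude degradation
# proxy). Reference text and scales are placeholders.
import copy
import matplotlib.pyplot as plt
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")
ref = tok("A boy reaches for a cookie jar while the sink overflows.",
          return_tensors="pt")

scales, losses = [0.0, 0.02, 0.05, 0.1, 0.2], []
for s in scales:
    model = copy.deepcopy(base)
    with torch.no_grad():
        for p in model.parameters():
            p.add_(torch.randn_like(p) * s * p.std())
        losses.append(model(**ref, labels=ref["input_ids"]).loss.item())

plt.plot(scales, losses, marker="o")
plt.xlabel("noise scale (relative to weight std)")
plt.ylabel("LM loss on reference caption")
plt.title("Output degradation vs. simulated damage")
plt.savefig("damage_sweep.png")
```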


Conclusion:

You’ve got a strong and creative project on your hands. The novel aspect of “damaging” an LLM to simulate aphasia could open up some interesting avenues in both AI and cognitive neuroscience. Focusing on targeted weight modifications, refining your evaluation approach, and building a solid classifier will be key to driving the project forward. Overall, I’m really looking forward to seeing where this goes—good luck with the next steps!
