There is another worry with respect to open sourcing large language models. It stems from the view that research on them in general needs to slow down or be halted. This is because there are risks involved in continuing to roll out increasingly capable models and features while our understanding of these models is still very limited. We currently don't know how to align models so that they won't kill us once they are smart enough. Since we only have one go at tackling this problem, we need much more time and research.
On this view, open sourcing models will ultimately accelerate LLM development and make it harder to regulate them or put any risk-prevention guardrails in place. In doing so, we'd simply be accelerating our extinction.
This is a view that Eliezer Yudkowsky, for instance, but also many others hold (I'm simplifying; his view is much more nuanced and well put together than my summary attempt).
I personally think that the more researchers are thinking about the problem, the better. Progress and decisions around LLMs shouldn't be limited to a few companies. While the risks mentioned above are real, I believe we need the community as a whole to be thinking and working on how to implement safe AI. Moreover, current market incentives make top tech companies prone to ethics washing: they'll be inclined to release models before any serious testing is done in order to capture more market share and establish themselves for the future.
Ultimately, it is a very difficult ethical and moral problem that I am only starting to wrap my head around. I'm still very uncertain about which option is better: closed- or open-source models.
Thank you for your work and for promoting discussions around this topic.
Thank you for sharing this, Julien. There is certainly a lot to unpack, and our goal with these questions is to elevate the level of conversation and inject nuance into the discussion. We appreciate your engagement and hope others can chime in with their thoughts as well. Thanks!