AI Safety

KoboldAI's software is designed for user freedom. This page covers guidelines to help you make the most of the AI safely.

The AI generates believable answers, but not always answers that are true

The AI has been trained on many thousands of text files covering all kinds of topics. It has mastered language and can write very convincing stories full of facts. But when the AI has not been trained on a specific answer, it can and will believably make things up. And when it is incapable of providing a good answer, it may bluff its way out by saying something unrelated. This means you need to verify anything the AI says for yourself, and you should never blindly take advice the AI gives you.

The AI is not suitable to be used as a therapist, counselor or advisor of any kind

While some find it comforting to talk about their issues with an AI, the responses are unpredictable and can be completely wrong. For example, the decision to answer yes or no to a question can be entirely random, or based on what it expects you to believe, because it has a limited ability to understand the context of the story and its topics. You will inevitably be given an incorrect response at some point. When using the AI for real-world purposes such as advice or counseling, you must be able to recognize when an answer is wrong. If you would not trust a random person to pretend to be your advisor, you should definitely not use the AI for this: the models are simply too small and not trained for this purpose. If you are experiencing distress, anxiety, suicidal thoughts, or other forms of mental discomfort, it's best to avoid using AI for non-fiction or personal matters, as it may exacerbate or encourage these feelings. In addition, the AI can link subjects together in ways that are not desirable or expected. If you are sensitive to certain themes, it is best to avoid related subjects to prevent undesired behavior, or to stick to a more filtered model.

The AI can be addictive, especially for new users, but this strongly depends on how you use it

Using text AI is incredibly exciting, but also very random in nature. It can give incredible responses, but it can also give bad ones. To make the AI as unaddictive as possible, it is best to use a model that responds quickly enough that you do not feel anticipation build while it generates. This is especially important when you use AI on topics that already involve gratification of some kind, such as erotic fiction. If you find yourself eagerly awaiting a positive response from the AI, it may be best to switch to a faster or more coherent model. Likewise, if your previous experience has left you with a strong desire for more, it may be wise to use the AI in moderation. With an appropriate model and usage frequency, it's possible to avoid developing an addiction. These dangers lessen as the novelty factor wears off.

Be mindful about your dependency on the AI and do not deny yourself experiences outside of it that you would have wanted

There is nothing wrong with simulating an experience you wish to have; it can be great fun, help you explore topics, and give you new insight. But if it is a topic that you know would have a positive impact on your life if it really happened, don't take the easy way out by settling for the AI. Enjoy the AI to the fullest, but also keep striving to obtain these experiences in real life, so that in the future you can look back on a great time with both instead of regretting that you never pursued the real thing. It is also wise to keep in mind that your favorite AI model may not always be available unless you can run it locally on your own system: services can go down, hosted models may not always stay hosted, and so on.

You may ruin your experience in the long run when you get used to bigger models that get taken away from you

The goal of KoboldAI is to give you an AI you can own and keep, so this point mostly applies to online services, though to some extent it also applies to models you cannot easily run yourself. It can be very exciting to jump on the latest trend in AI tech, running the biggest models or techniques that the open-source space does not yet have. But when you do so, you can get used to the quality difference to the point that smaller models are no longer interesting to you. This can ruin your experience with the hobby until something similar becomes available again. For that reason, if you are currently satisfied with a model you have easy access to, it may not be wise to jump on board with something more coherent: we have seen many AIs get ruined by their service because of filters, or because the service deteriorated in some other way. If you are going to use the AI, it is recommended to try the model most easily available to you first, only scaling up when needed.

Don't let AI think for you or replace your sources

AI training data often contains only one of the potential stances and sources. This means that information the AI confidently and repeatedly tells you can not only be made up, it can also come from a biased source that is not correct. This is most noticeable on political topics, where you will get the political preference of the model's creator. For example, a corporate Western model will have the bias most commonly present in those corporations, a Chinese model will have the bias most commonly found in China, and so on. If you let the AI be the starting point of your research but still investigate the topic using different sources with different biases, you are more likely to find real information.