This is a demo for our URIAL paper, which enables base LLMs to chat via in-context alignment. You can talk directly with base, untuned LLMs to see what knowledge and skills they have already acquired from pre-training alone, without SFT, xPO, or RLHF. You can also use this demo to explore the pre-training data of base LLMs by chatting with them! One very interesting case I found: the base version of Llama-3-8B often thinks it was built by OpenAI, lol.
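As a rough illustration of the idea (not the official URIAL code), in-context alignment prepends a short system-style preamble and a few stylized query/answer examples to the user's message, so a base, untuned LM simply continues the text in a chat-assistant style. The preamble text and few-shot pairs below are made up for this sketch; the real URIAL prompt uses its own curated examples.

```python
# Minimal sketch of in-context alignment for a base LM.
# All preamble/example text here is illustrative, not the official URIAL prompt.

URIAL_PREAMBLE = (
    "Below is a conversation between a curious user and a helpful AI assistant.\n"
)

# Hypothetical few-shot (query, answer) pairs written in the target chat style.
FEW_SHOT = [
    ("What is the capital of France?", "The capital of France is Paris."),
    ("Give me one tip for better sleep.",
     "Try to go to bed and wake up at the same time every day."),
]

def build_urial_prompt(user_query: str) -> str:
    """Assemble the full prompt: preamble + few-shot examples + the new query.

    The trailing "# Answer:" cues the base model to continue as the assistant.
    """
    parts = [URIAL_PREAMBLE]
    for q, a in FEW_SHOT:
        parts.append(f"# Query:\n{q}\n\n# Answer:\n{a}\n")
    parts.append(f"# Query:\n{user_query}\n\n# Answer:\n")
    return "\n".join(parts)
```

The resulting string is fed to the base model as a plain completion prompt; no fine-tuning is involved, which is what lets you probe what the model learned from pre-training alone.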