
A Windows computer with an RTX 30- or 40-series GPU and at least 8GB of VRAM is required to run Nvidia's Chat with RTX.

Nvidia has introduced Chat with RTX, an AI-powered chatbot that runs locally on a PC without requiring an Internet connection. The GPU maker has been at the forefront of the AI industry since the generative AI boom, with its cutting-edge chips powering AI products and services, and it also offers businesses end-to-end solutions through its AI platform. Chat with RTX is the company's first chatbot of its own, and a free demo version is available to download right now.

On Tuesday, February 13, Nvidia unveiled the tool, describing it as a personalised AI chatbot. To download the program, users need a Windows PC or workstation with an RTX 30- or 40-series GPU and at least 8GB of VRAM. Once downloaded, the application can be installed with a few clicks and used immediately.

Chat with RTX is a local chatbot, meaning it has no awareness of the outside world. However, users can point it at their own personal data, including documents, files, and more. One possible application is to feed it a large volume of work-related documents and then ask it to summarise, analyse, or answer a specific question that could otherwise take hours of manual searching. It can be similarly useful for research, skimming through multiple studies and papers. Text, PDF, DOC/DOCX, and XML file formats are supported. The AI bot also accepts URLs for YouTube videos and playlists, using the videos' transcriptions to answer questions or summarise their content.

According to the demo video, Chat with RTX is essentially a Python instance running on a local Web server, and it does not ship with large language model (LLM) data on initial download. After downloading either the Mistral or Llama 2 model, users can run queries on their own data. According to the company, the chatbot's functionality is built on open-source projects including TensorRT-LLM and retrieval-augmented generation (RAG), with RTX acceleration.
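To picture how retrieval-augmented generation works in a setup like this, here is a minimal, illustrative sketch. It is not Nvidia's implementation: the real pipeline uses TensorRT-LLM with an RTX-accelerated Mistral or Llama 2 model, whereas this toy version stubs out the LLM and uses plain bag-of-words cosine similarity for the retrieval step, so it runs anywhere.

```python
# Toy RAG sketch: retrieve the most relevant local document for a query,
# then augment the prompt with it before it would be handed to an LLM.
# Retrieval here is bag-of-words cosine similarity, standing in for a
# real embedding model; the LLM call itself is omitted.
import math
from collections import Counter

def vectorize(text):
    # Term counts as a crude stand-in for an embedding vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    # Rank the indexed documents by similarity to the query.
    qv = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(qv, vectorize(d)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    # The "augmented" part of RAG: prepend retrieved context to the question.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The quarterly report shows revenue grew 12 percent year over year.",
    "Meeting notes: the launch is postponed to the second quarter.",
    "Recipe: combine flour, butter and sugar, then bake for 20 minutes.",
]
print(build_prompt("When is the launch happening?", docs))
```

The point of the pattern is that the model never needs the whole document collection in its context window; only the few passages most relevant to the question are retrieved and injected into the prompt.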

The Verge reported that the program weighs in at about 40GB and that the Python instance can take up to 3GB of RAM. The report also noted that the chatbot creates JSON files inside the folders it is asked to index, so pointing it at your entire document folder or a large parent folder could be problematic.
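The behavior The Verge describes can be pictured with a short sketch: walking a folder tree and writing a JSON index file into the directory being scanned. This illustrates the general pattern only; the file name, fields, and format here are hypothetical, not Nvidia's actual index layout.

```python
# Illustrative folder indexer that writes a JSON file inside the scanned
# directory -- the general pattern described in the report, NOT Chat with
# RTX's actual index format (the index name and fields are made up).
import json
import os

# File formats Chat with RTX is reported to support.
SUPPORTED = {".txt", ".pdf", ".doc", ".docx", ".xml"}

def index_folder(root):
    entries = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in SUPPORTED:
                path = os.path.join(dirpath, name)
                entries.append({"path": path, "size": os.path.getsize(path)})
    # The index lands inside the folder being scanned -- which is why
    # pointing such a tool at a large parent directory is problematic.
    index_path = os.path.join(root, "index.json")
    with open(index_path, "w") as f:
        json.dump(entries, f, indent=2)
    return index_path
```

Because the index is written alongside the data, indexing a broad parent folder both recurses over everything beneath it and leaves generated files scattered in user directories.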
