Last week we started experimenting with some methods for sentiment analysis and set up some frameworks for that kind of effort. This week we take a slightly different approach and look at an election model. I'm actively working on election-focused, prompt-based training for large language models to improve their predictions. Right now I have access to Bard, ChatGPT, and Llama 2 to complete that training. This type of training involves feeding election models to the model in written form as a prompt for replication, and I have been including the source data and the written-out logic as part of the prompt as well.
This is very interesting information. I feel like the data needs to be weighted based on the input in a variety of ways: age, location, party affiliation. Some demographics may share information or take part in surveys more than others, so the output is skewed by the input unless there is a large enough variety of inputs.
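The reweighting idea in the comment above can be sketched as simple post-stratification: scale each respondent's answer so that every demographic group counts in proportion to its population share rather than its share of the sample. The survey responses and population shares below are made up purely for illustration.

```python
# A minimal sketch of demographic weighting (post-stratification).
# All data here is hypothetical, for illustration only.
from collections import Counter

# Hypothetical survey responses: (age_group, supports_candidate: 1 or 0)
responses = [
    ("18-34", 1), ("18-34", 1), ("18-34", 0),
    ("35-64", 1), ("35-64", 0),
    ("65+", 0), ("65+", 0), ("65+", 1), ("65+", 0), ("65+", 0),
]

# Assumed population shares for each age group (invented for this sketch)
population_share = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}

def weighted_support(responses, population_share):
    """Reweight respondents so each group matches its population share."""
    sample_counts = Counter(group for group, _ in responses)
    n = len(responses)
    weighted_sum = 0.0
    total_weight = 0.0
    for group, answer in responses:
        # weight = population share / sample share for the respondent's group
        weight = population_share[group] / (sample_counts[group] / n)
        weighted_sum += weight * answer
        total_weight += weight
    return weighted_sum / total_weight

print(round(weighted_support(responses, population_share), 3))  # → 0.475
```

Here the 65+ group is half the sample but only a quarter of the assumed population, so each of its responses is down-weighted to 0.5, while the under-sampled 35-64 group is up-weighted to 2.25. The same pattern extends to location and party by weighting on the joint cells.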