I often post my hypotheses on mindset, failures, success, and growth on LinkedIn.
This time I decided to have one reviewed by two LLMs: ChatGPT and Bard.
Hypothesis: Patience and a calm mind are the strategy. Rate this on the different parameters of a hypothesis on a scale of 1-10.
My understanding and follow-up
I was puzzled by the gap in testability scores between the two models.
ChatGPT gave 3/10.
Bard gave 7/10.
As I read through the responses, I learned that ChatGPT considered a calm mind and patience to be subjective and unmeasurable, and therefore judged the hypothesis untestable.
Bard, on the other hand, assumed that if a calm mind and patience can be measured, then the success of those who practice them can be tested, which improves testability.
ChatGPT does not assume that mindset and calmness can be measured objectively.
Bard makes that assumption and goes on to correlate success with it.
Both are correct.
So why would different models give different responses?
Obviously, they were trained on different data sets.
What other factors affect them?
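One such factor, beyond the training data, is the decoding strategy: a model samples each next token from a probability distribution, and a sampling temperature controls how much weight the less likely continuations get, so even the same model can answer the same prompt differently across runs. Below is a minimal sketch of temperature scaling using toy, made-up logits (not taken from any real model):

```python
import math

def softmax(logits, temperature=1.0):
    # Divide the logits by the temperature before normalizing:
    # a low temperature sharpens the distribution (top choice dominates),
    # a high temperature flattens it (alternatives become more likely).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token logits for the same prompt.
logits = [2.0, 1.0, 0.5]

low = softmax(logits, temperature=0.5)
high = softmax(logits, temperature=2.0)

# The top token's probability shrinks as temperature rises,
# making varied (and sometimes divergent) answers more likely.
print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

This is only an illustration of one knob; model architecture, fine-tuning, and safety guidelines the vendors apply also shape how each model frames its answer.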
Here is Bard's response:
Here is ChatGPT's response: