|
Post by Patrick on Feb 20, 2020 9:28:08 GMT
If someone believes that fixing the hole is not a good idea, he isn't employing common sense. That is your common sense and not his. It wouldn't be anyone's common sense, because the notion of a universality pertaining to any sort of sense would be false since there is only each person's unique "sense". I do agree that common sense, as it is usually considered, is a vague and relative idea that differs across time periods and cultures. That is different from Consciousness though, which obviously all humans have.
|
|
|
Post by Arlon10 on Feb 20, 2020 10:26:48 GMT
There is always suspicion, more or less. You would convict on suspicion? That's scary. One of us will have to read the thread either again or for the first time. It's in there somewhere. In that case, it would seem that it did indeed get lost. That's a pity, but I assume it's probably just a longer, more complicated version of mine. Scary? Funny, that's what I was thinking. Who said anything about conviction? Hint: you. My solution, scroll past "his" solution. Go quickly since the Lords of Private Schooling might zap it away.
|
|
|
Post by Arlon10 on Feb 20, 2020 10:37:03 GMT
I'm sorry I gave away an actual solution to the Monty Hall problem. I doubt many people will find it before it scrolls away though. Meanwhile I must say you are doing a great job of discouraging people from finding answers on the internet, copying and pasting for their homework. You truly make the internet a PoS. My concern is that people who believe what you and your associates say are voting and that is ruining the country. An uninformed or misinformed populace will ruin democracy. You are not my teacher obviously. If my guess is correct you don't really want to be. If you don't really want to be, rest assured that will never happen. LMAO that you "gave it away." The solution has been out there for years, long before you and I started discussing it. For your sake (and for Admin's) I will post both of our solutions so they will be easy to find. You can lie and say I "copied and pasted" all you want. I didn't. I've known about Bayes for a long time, long before I read/heard about Monty Hall, and when I did discover Monty Hall it was a straightforward application for me. You can also "suspect" me of having socks. I do not. This is my one and only account here and on every forum I've ever been on. I take no delight in trolling or in playing "characters" other than myself on message boards. If I wanted to play someone other than myself, I'd start up a local D&D game or take an improv/acting class. Don't rush on my behalf. "Lord of the Board" are you? Do you speak for all 13 people here?
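For anyone who would rather check the Monty Hall result than take either poster's word for it, here is a minimal simulation (a sketch by this editor, not either of the solutions the posters refer to). It assumes the standard rules: the host always opens a goat door the player didn't pick, then the player stays or switches.

```python
import random

def monty_hall(trials=100_000, switch=True, seed=42):
    """Simulate the Monty Hall game and return the win rate.

    The host always opens a door that hides a goat and was not
    the player's pick; the player then stays or switches.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the car
        pick = rng.randrange(3)  # player's first choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"switch: {monty_hall(switch=True):.3f}")   # ≈ 0.667
print(f"stay:   {monty_hall(switch=False):.3f}")  # ≈ 0.333
```

Switching wins exactly when the first pick was wrong, which happens 2/3 of the time; the simulation just makes that visible.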
|
|
|
Post by general313 on Feb 20, 2020 17:29:09 GMT
As I was thinking about the energy "problem" and the computer simulation equivalent, where multiple copies of an object or thread consume more and more computing resources, it occurred to me that a quantum computer wouldn't have this problem. It would come naturally to a quantum computer to handle a multiplicity of threads simultaneously, with no change in the load/demand on computing resources. Maybe the universe is a large quantum computer? Maybe it will be possible to start out with an assumption-less version of QM and apply some knowledge learned from quantum computing to arrive at a MW scenario. It's certainly possible. That's what physicists like David Deutsch are hoping for with the advent of quantum computing. Because of this discussion I actually had a listen to a talk by Sean Carroll on the history of QM last night, and one point he made about the conservation of energy is that really there is only one universal wavefunction for everything, and when worlds "split" what's happening is that each world "slice" is getting thinner and thinner. Most of the other stuff he said I already knew, but it was still interesting. The ending Q&A was especially fun because he was asked a lot of questions where he had to say "we don't know," which gives you an idea of just how many unknowns there still are at the frontier of physics. I also liked what he said about trying to derive gravity from QM rather than trying to quantize a classical conception of gravity. I took a quick peek at the beginning of the Jaynes pdf and noticed a very nice quote from one of my heroes, James Clerk Maxwell, written in 1850.
|
|
|
Post by Admin on Feb 20, 2020 23:15:30 GMT
You would convict on suspicion? That's scary. In that case, it would seem that it did indeed get lost. That's a pity, but I assume it's probably just a longer, more complicated version of mine. Scary? Funny, that's what I was thinking. Who said anything about conviction? Hint: you. You were accusing somebody of something, no? To be honest, I'm not really interested. I was just being polite. Who are the Lords of Private Schooling?
|
|
|
Post by Eva Yojimbo on Feb 21, 2020 2:15:35 GMT
LMAO that you "gave it away." The solution has been out there for years, long before you and I started discussing it. For your sake (and for Admin's) I will post both of our solutions so they will be easy to find. You can lie and say I "copied and pasted" all you want. I didn't. I've known about Bayes for a long time, long before I read/heard about Monty Hall, and when I did discover Monty Hall it was a straightforward application for me. You can also "suspect" me of having socks. I do not. This is my one and only account here and on every forum I've ever been on. I take no delight in trolling or in playing "characters" other than myself on message boards. If I wanted to play someone other than myself, I'd start up a local D&D game or take an improv/acting class. Don't rush on my behalf. "Lord of the Board" are you? Do you speak for all 13 people here? It wasn't difficult to find the first post where you "solved it." If that's not the post you had in mind, it was surely one of the few immediately after. Don't know why doing this makes me "Lord of the Board," but you didn't seem like you were going to.
|
|
|
Post by Eva Yojimbo on Feb 21, 2020 2:17:56 GMT
So you're getting thrown by a basic subjective/objective distinction? If I find your explanation complicated and define it to be such, then you made it complicated as defined by that. Those two statements mean the same thing. We can disagree about that definition of "complicated" but that's how it works since nothing is innately "complicated" without our subjective say-so. Ok, so you're merely stating my explanation is complicated via your definition, yet you compared it as more complicated than your explanation, defined complicated by another, along with a summary of how yours was less complicated. So the intent of your comment is to set up a comparison, even though nowhere in my posts have I implicitly or explicitly stated that your posts are complicated, nor mine uncomplicated, nor mine less complicated than yours, and so I haven't disagreed at all with your definition of complicated, nor have I implied that your definition of complication is not subjective. You haven't said anything specific or general about my original post that caused you to comment on it in the first place, such as maybe a theory/method to facilitate translation that, when applied to my post, causes you to reason it complicated. If it wasn't anything specific or general about my original post then why comment on it in the first place? If it was, then I don't understand your reluctance to state this specific or general something about my post; surely it would have been entailed in the reason why you commented in the first place, and so no effort to recite. If I haven't been clear before then I'll attempt clarity now: What was specific or general about my post that caused you to comment on it? Interaction with others not being a necessity for board attendance notwithstanding, not wanting to nor indicating to leave it to others to arbitrate which post was more complicated isn't not wanting to communicate with people in general.
But apologies for accusing you of those fallacies, because I would never in a million years have thought that the intent of your reply to me was to forward a comparison and then leave it to others to arbitrate which post was more complicated. I actually thought you were making a rational critique. At this point I'm regretting commenting on your post at all since this seems to be going nowhere. To answer the question of WHY I commented, call it generosity. I recognized that you put actual effort/thought into it, but I figured it would likely be ignored because most wouldn't understand it. Indeed, all you got was a snarky "A for effort" from goz. At least I articulated that I found it difficult, implying that maybe it would be worth your while to "translate it" so others could better understand it. I figured that you'd prefer someone commenting, even if just to say they found it too complicated, rather than just ignoring it altogether, as most did.
|
|
|
Post by goz on Feb 21, 2020 2:51:12 GMT
Ok, so you're merely stating my explanation is complicated via your definition, yet you compared it as more complicated than your explanation, defined complicated by another, along with a summary of how yours was less complicated. So the intent of your comment is to set up a comparison, even though nowhere in my posts have I implicitly or explicitly stated that your posts are complicated, nor mine uncomplicated, nor mine less complicated than yours, and so I haven't disagreed at all with your definition of complicated, nor have I implied that your definition of complication is not subjective. You haven't said anything specific or general about my original post that caused you to comment on it in the first place, such as maybe a theory/method to facilitate translation that, when applied to my post, causes you to reason it complicated. If it wasn't anything specific or general about my original post then why comment on it in the first place? If it was, then I don't understand your reluctance to state this specific or general something about my post; surely it would have been entailed in the reason why you commented in the first place, and so no effort to recite. If I haven't been clear before then I'll attempt clarity now: What was specific or general about my post that caused you to comment on it? Interaction with others not being a necessity for board attendance notwithstanding, not wanting to nor indicating to leave it to others to arbitrate which post was more complicated isn't not wanting to communicate with people in general. But apologies for accusing you of those fallacies, because I would never in a million years have thought that the intent of your reply to me was to forward a comparison and then leave it to others to arbitrate which post was more complicated. I actually thought you were making a rational critique. At this point I'm regretting commenting on your post at all since this seems to be going nowhere. To answer the question of WHY I commented, call it generosity.
I recognized that you put actual effort/thought into it, but I figured it would likely be ignored because most wouldn't understand it. Indeed, all you got was a snarky "A for effort" from goz. At least I articulated that I found it difficult, implying that maybe it would be worth your while to "translate it" so others could better understand it. I figured that you'd prefer someone commenting, even if just to say they found it too complicated, rather than just ignoring it altogether, as most did. Excuse me! I was not at all 'snarky' and I appreciated the effort. We then went on to have a frank discussion about length, breadth, and efficiency, complete with floral and decorative inputs.
|
|
|
Post by Eva Yojimbo on Feb 21, 2020 2:52:55 GMT
At this point I'm regretting commenting on your post at all since this seems to be going nowhere. To answer the question of WHY I commented, call it generosity. I recognized that you put actual effort/thought into it, but I figured it would likely be ignored because most wouldn't understand it. Indeed, all you got was a snarky "A for effort" from goz. At least I articulated that I found it difficult, implying that maybe it would be worth your while to "translate it" so others could better understand it. I figured that you'd prefer someone commenting, even if just to say they found it too complicated, rather than just ignoring it altogether, as most did. Excuse me! I was not at all 'snarky' and I appreciated the effort. We then went on to have a frank discussion about length, breadth, and efficiency, complete with floral and decorative inputs. My mistake! Sounded kinda snarky to me. In the US phrases like that are often used sarcastically or patronizingly.
|
|
|
Post by goz on Feb 21, 2020 2:55:49 GMT
Excuse me! I was not at all 'snarky' and I appreciated the effort. We then went on to have a frank discussion about length, breadth, and efficiency, complete with floral and decorative inputs. My mistake! Sounded kinda snarky to me. In the US phrases like that are often used sarcastically or patronizingly. The nuance in the interchange involved an A vs. an A+. IMHO the A was well earned.
|
|
|
Post by Aj_June on Feb 22, 2020 14:15:43 GMT
Eva Yojimbo Have you read about prospect theory? Can you give me your understanding of the same in simple language?
|
|
|
Post by Eva Yojimbo on Feb 23, 2020 3:05:17 GMT
Eva Yojimbo Have you read about prospect theory? Can you give me your understanding of the same in simple language? I've come across the term when reading Kahneman, but I haven't really studied it. Looking it up, it seems like a pretty common idea that I happen to see in my profession of poker, where people assess potential gains/losses very differently, and those biases lead to decision errors in terms of what would produce the most positive expected value. If I were to try to put it simply it would be:
1. People tend to be more averse to loss than attracted to gain
2. This makes people less willing to take risks if it would mean a big loss, even if the potential gain could more than compensate
3. People also tend to prefer options with less risk (more certainty), even if those options produce less expected value (EV) than riskier options
If we take a basic example, if you ask people to risk $1000 to win $1001 on a coin-flip, a great many people wouldn't take that bet, even though it has an EV of +$0.50. This is because the thought of losing $1000 (to many) is stronger than the thought of gaining $1001. Likewise, if you presented an option where a person could choose to win $500, or one in which they either win $1500 67% of the time, or lose $500 33% of the time, most would take the former option, even though the latter option is +$335 better than the first. It also seems, just reading on Wikipedia, that Kahneman/Tversky have a theorem that actually models this behavior. I'd probably have to really dig into more of Kahneman's scholarly work to go much further with this.
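The two gambles in the post can be checked with a few lines of arithmetic. A small caveat on the numbers: at exactly 67%/33% the second gamble's edge over the sure $500 works out to $340, while at exact 2/3–1/3 odds it is about $333, so the quoted +$335 sits between the two; the qualitative point (most people take the sure $500 anyway) is unaffected. The loss-aversion coefficient below is an illustrative value chosen by this editor, not a figure from the post.

```python
def expected_value(outcomes):
    """EV of a gamble given (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# Coin flip: risk $1000 to win $1001.
coin = expected_value([(0.5, 1001), (0.5, -1000)])
print(coin)  # 0.5

# Sure $500 vs. win $1500 with prob 2/3, lose $500 with prob 1/3.
sure = 500
gamble = expected_value([(2/3, 1500), (1/3, -500)])
print(gamble - sure)  # ≈ 333.33

# Point 1 in the post (losses loom larger than gains) is often
# modeled by weighting losses more heavily; with an illustrative
# weight of 2x, the positive-EV coin flip "feels" strongly negative:
LAMBDA = 2.0  # hypothetical loss-aversion coefficient
felt = 0.5 * 1001 - LAMBDA * 0.5 * 1000
print(felt)  # -499.5
```

This is only a sketch of the asymmetry; Kahneman and Tversky's actual model also curves the value function and reweights probabilities.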
|
|
|
Post by Arlon10 on Feb 24, 2020 22:50:40 GMT
Eva Yojimbo Have you read about prospect theory? Can you give me your understanding of the same in simple language? I've come across the term when reading Kahneman, but I haven't really studied it. Looking it up, it seems like a pretty common idea that I happen to see in my profession of poker where people assess potential gains/losses very differently, and those biases lead to decision errors in terms of what would produce the most positive expected value. If I were to try to put it simply it would be: 1. People tend to be more averse to loss than attracted to gain 2. This makes people less willing to take risks if it would mean a big loss, even if the potential gain could more than compensate 3. People also tend to prefer options with less risk (more certainty), even if those options produce less expected value (EV) than riskier options If we take a basic example, if you ask people to risk $1000 to win $1001 on a coin-flip, a great many people wouldn't take that bet, even though it has an EV of +$0.50. This is because the thought of losing $1000 (to many) is stronger than the thought of gaining $1001. Likewise, if you presented an option where a person could choose to win $500, or one in which they either win $1500 67% of the time, or lose $500 33% of the time, most would take the former option, even though the latter option is +$335 better than the first. It also seems, just reading on Wikipedia, that Kahneman/Tversky have a theorem that actually models this behavior. I'd probably have to really dig into more of Kahneman's scholarly work to go much further with this. The Devil, you say.
|
|
|
Post by Aj_June on Feb 24, 2020 23:02:43 GMT
Eva Yojimbo Have you read about prospect theory? Can you give me your understanding of the same in simple language? I've come across the term when reading Kahneman, but I haven't really studied it. Looking it up, it seems like a pretty common idea that I happen to see in my profession of poker where people assess potential gains/losses very differently, and those biases lead to decision errors in terms of what would produce the most positive expected value. If I were to try to put it simply it would be: 1. People tend to be more averse to loss than attracted to gain 2. This makes people less willing to take risks if it would mean a big loss, even if the potential gain could more than compensate 3. People also tend to prefer options with less risk (more certainty), even if those options produce less expected value (EV) than riskier options If we take a basic example, if you ask people to risk $1000 to win $1001 on a coin-flip, a great many people wouldn't take that bet, even though it has an EV of +$0.50. This is because the thought of losing $1000 (to many) is stronger than the thought of gaining $1001. Likewise, if you presented an option where a person could choose to win $500, or one in which they either win $1500 67% of the time, or lose $500 33% of the time, most would take the former option, even though the latter option is +$335 better than the first. It also seems, just reading on Wikipedia, that Kahneman/Tversky have a theorem that actually models this behavior. I'd probably have to really dig into more of Kahneman's scholarly work to go much further with this. Thanks for your post, Eva. It's good to know about it from your perspective given that you are a poker player. I am a bit tied up with other tasks but I will return to this on the weekend. It is actually a theory of massive importance. It is the theory that won Kahneman his Nobel Prize. It also tears apart traditional economics' central belief that humans are inherently rational. I will get back to it.
|
|
|
Post by general313 on Feb 25, 2020 1:07:58 GMT
Eva Yojimbo Have you read about prospect theory? Can you give me your understanding of the same in simple language? I've come across the term when reading Kahneman, but I haven't really studied it. Looking it up, it seems like a pretty common idea that I happen to see in my profession of poker where people assess potential gains/losses very differently, and those biases lead to decision errors in terms of what would produce the most positive expected value. If I were to try to put it simply it would be: 1. People tend to be more averse to loss than attracted to gain 2. This makes people less willing to take risks if it would mean a big loss, even if the potential gain could more than compensate 3. People also tend to prefer options with less risk (more certainty), even if those options produce less expected value (EV) than riskier options If we take a basic example, if you ask people to risk $1000 to win $1001 on a coin-flip, a great many people wouldn't take that bet, even though it has an EV of +$0.50. This is because the thought of losing $1000 (to many) is stronger than the thought of gaining $1001. Likewise, if you presented an option where a person could choose to win $500, or one in which they either win $1500 67% of the time, or lose $500 33% of the time, most would take the former option, even though the latter option is +$335 better than the first. It also seems, just reading on Wikipedia, that Kahneman/Tversky have a theorem that actually models this behavior. I'd probably have to really dig into more of Kahneman's scholarly work to go much further with this. I posted not too long ago here on the idea that this risk aversion plays a role in income inequality. It allows richer people to take bigger risks than poorer people, and thus over time amplifies the inequality. 
The article I read also mentions that practically every western democracy is finely balanced on tax policies that favor the rich, up to a threshold of tolerance which, if crossed, would raise the risk of revolt.
|
|
|
Post by Eva Yojimbo on Feb 25, 2020 2:18:28 GMT
I've come across the term when reading Kahneman, but I haven't really studied it. Looking it up, it seems like a pretty common idea that I happen to see in my profession of poker where people assess potential gains/losses very differently, and those biases lead to decision errors in terms of what would produce the most positive expected value. If I were to try to put it simply it would be: 1. People tend to be more averse to loss than attracted to gain 2. This makes people less willing to take risks if it would mean a big loss, even if the potential gain could more than compensate 3. People also tend to prefer options with less risk (more certainty), even if those options produce less expected value (EV) than riskier options If we take a basic example, if you ask people to risk $1000 to win $1001 on a coin-flip, a great many people wouldn't take that bet, even though it has an EV of +$0.50. This is because the thought of losing $1000 (to many) is stronger than the thought of gaining $1001. Likewise, if you presented an option where a person could choose to win $500, or one in which they either win $1500 67% of the time, or lose $500 33% of the time, most would take the former option, even though the latter option is +$335 better than the first. It also seems, just reading on Wikipedia, that Kahneman/Tversky have a theorem that actually models this behavior. I'd probably have to really dig into more of Kahneman's scholarly work to go much further with this. I posted not too long ago here on the idea that this risk aversion plays a role in income inequality. It allows richer people to take bigger risks than poorer people, and thus over time amplifies the inequality. In the article I read it also mentions that practically every western democracy is finely balanced on tax policies that favor the rich, to a threshold of tolerance that if increased would increase the risk of revolt. 
I've often thought of economics as similar to a concept in tournament poker about how the ideal strategy (conservative vs aggressive/risky) changes depending on your stack. In tournament poker there's always a value to just having chips, because it means you have some stake in the prizes/pay-outs just by being there. When you have a big stack, you have less risk of busting and losing all that value, so aggressive/risky play allows you to accumulate more and be dominant. OTOH, when you're a "short stack" you're in danger of losing all that value, so it's necessary to play conservatively just to stay in the tournament at all. Ideally, you want to look for spots where you're highly probable to win in order to climb back up. Economics seems similar in that there's a certain value in just maintaining a certain level. Usually that level is just being able to live relatively comfortably, being able to afford necessities and such. If you get below that, it can be devastating and very, very difficult just to get back to that level. Once you get past that it's much easier to take that excess and be riskier with it in order to accumulate more, because losses don't hurt as much. So there's probably a reason why this bias exists. Even if we think of it in terms of evolution, conservative/risk-aversion attitudes help keep us alive; riskier attitudes merely have a chance of making things better, or ending disastrously. In economics (and poker) it's easy to just look at a situation and assess the EV in numbers, but when you factor in the rest of "life," as it were, things are rarely so simple.
|
|
|
Post by Arlon10 on Feb 25, 2020 2:53:27 GMT
I posted not too long ago here on the idea that this risk aversion plays a role in income inequality. It allows richer people to take bigger risks than poorer people, and thus over time amplifies the inequality. In the article I read it also mentions that practically every western democracy is finely balanced on tax policies that favor the rich, to a threshold of tolerance that if increased would increase the risk of revolt. I've often thought of economics similarly to a concept in tournament poker about how the ideal strategy (conservative VS aggressive/risky) changes depending on your stack. In tournament poker there's always a value to just having chips, because it means you have some stake in the prizes/pay-outs just by being there. When you have a big stack, you have less risk of busting and losing all that value, so aggressive/risky play allows you to accumulate more and be dominant. OTOH, when you're a "short stack" you're in danger of losing all that value, so it's necessary to play conservative just to stay in the tournament at all. Ideally, you want to look for spots where you're highly probable to win in order to climb back up. Economics seem similar in that there's a certain value of just maintaining a certain level. Usually that level is just being able to live relatively comfortable, being able to afford necessities and such. If you get below that, it can be devastating and very, very difficult just to get back to that level. Once you get past that it's much easier to take that excess and be riskier with it in order to accumulate more, because losses don't hurt as much. So there's probably a reason why this bias exists. Even if we think of it in terms of evolution, conservative/risk-aversion attitudes help keep us alive; riskier attitudes merely have a chance of making things better, or ending disastrously. 
In economics (and poker) it's easy to just look at a situation and assess the EV in numbers, but when you factor in the rest of "life," as it were, things are rarely so simple. Quite so. People often have very different goals than accumulating the most wealth. "Risky" adventures like becoming a policeman or fireman do not really pay well.
|
|
|
Post by Aj_June on May 14, 2020 3:54:14 GMT
Of course, and that was my point regarding how behavioural economics/behavioural finance rebuts traditional economics and traditional finance. Most of the economic theories even now are based on the idea that human beings are totally rational and, by extension, that the markets are efficient and rational at the macro level. The concept of "rational economic man" is the bedrock of classical economics, which says that:
# Humans have perfect information about market prices
# Humans make completely rational decisions to maximise their utility
# Humans know what they want and efficiently make their choices.
# Humans only act in self interest.
Of course that is wrong because almost all humans suffer from cognitive biases to some degree, do not have perfect information and do make inefficient decisions. I personally do not challenge the 4th point I listed and do believe classical economics is right about that. But behavioural economics/finance counters all those points. And one of the ways through which the assumptions of classical economics are challenged is by demonstrating that humans suffer from biases which lead them to make decisions which a perfectly rational person who acts on Bayes won't make. I do not remember there being any mention of "behavioral" economics in my economics classes. I do well remember though that we did study "perfect information." It is often assumed in simple modeling of markets and the assumption is sometimes far off. It was just one of quite many factors we studied that can throw the models off. We use models anyway as they can help us understand the main factors. We did not assume that people maximize "utility." They probably do not. We did however assume they maximize what they believe they "want," or that they would make choices based on what they considered "best" for their personal tastes. Specifically they minimize "opportunity costs."
Some people might wonder how they could possibly fail at minimizing opportunity costs, but there are those issues of perfect information and several other snags and obstacles. Interesting post even if you deviate from standard traditional finance literature. Traditional economics and traditional finance are based on the assumptions that individuals: are risk-averse, self-interested utility maximizers; have perfect knowledge; are perfectly rational; and analyse economic decisions consistent with Bayes' formula. The reason why traditional finance makes these assumptions is that these simplifications were required for developing theories such as utility theory, CAPM, arbitrage principles, etc. This actually is more of what behavioural finance says. Behavioural finance says that individuals do not maximise their utility but instead "satisfice". So even if you didn't know about behavioural finance and its associated terminologies, you got close to it, maybe through your intelligence or experience or knowledge. What is satisfice? A cool looking term? Actually imo it is a very well thought out term. It is a blend of two words, 'satisfy' and 'suffice'. Individuals actually do not optimise their utility but satisfice. Satisficing is finding an acceptable solution, in contrast to optimizing, which is finding the best (optimal) solution. So why do we satisfice and not optimise as traditional finance assumes? Because optimising incurs a lot of cost and takes a lot of time. Also, normal people do not use Bayes' to actually update their probabilities and make their decisions based on that. Now, some people do not like Bayes'. But the thing is, Bayes' is not wrong. What is wrong is the expectation that people use Bayes' when they actually don't, unless they are following a model or are in a certain niche job/self-employment.
So how does behavioural finance attack the rational economic men (REM) of traditional finance? It does so by attacking the basic assumptions of perfect information, perfect rationality, and perfect self-interest.
1) We know that we do not have perfect information. Let's say I have $10,000 in my house and I want to get the highest return within an acceptable level of risk. Of course I do not have perfect information regarding the financial instruments I can invest in. Bonds? Equities? Structured Finance? Real Estate? Commodities? Many people might deposit their money in the bank and accept the rate of return given by the bank.
2) We know that we are not perfectly rational - our decisions are subject to limitations of knowledge and cognitive capacity. Even if we have close to perfect information, we may not have the capability to process the information to come to the right decision.
3) Perfect self-interest - now this is an area where I do not agree with those who explain human behaviour through behavioural psychology/behavioural finance/behavioural economics. The people in those behavioural fields point out that there are altruistic people who can take decisions based on benefiting others even at the cost of harming themselves, and so people are not always perfectly self-interested. I oppose that view. I believe if someone harms herself in order to benefit others, then that person is still being perfectly self-interested, because that person gets great utility from her act of benefiting others. So, no, I do not buy that people are not perfectly self-interested. In my view people are perfectly self-interested. But in any case, behavioural finance overall does a far better job of explaining human conduct than does traditional finance. But I should point out that traditional finance is normative, so it is not even meant to describe human behaviour. It recommends how things should actually be done rather than describing how they are done.
But the main point is that traditional finance has a lot of holes in its simplified assumptions. The same goes for the models built on traditional finance: the capital asset pricing model of Sharpe, the option pricing model of Black, Scholes, and Merton, the portfolio principles of Markowitz, and so on. All of them are useful but not perfect, and they can lead to false conclusions quite often.
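As an aside, the Bayesian updating that traditional finance assumes of everyone can be sketched in a few lines. All of the probabilities below are invented purely for illustration.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
# Made-up scenario: H = fund manager is skilled; E = manager beat the market this year.

p_skilled = 0.10          # prior: 10% of managers are genuinely skilled (assumed)
p_beat_if_skilled = 0.70  # a skilled manager beats the market 70% of years (assumed)
p_beat_if_lucky = 0.50    # an unskilled manager beats it 50% of years (a coin flip)

# Total probability of observing a market-beating year (law of total probability)
p_beat = p_beat_if_skilled * p_skilled + p_beat_if_lucky * (1 - p_skilled)

# Posterior: probability the manager is skilled, given one winning year
posterior = p_beat_if_skilled * p_skilled / p_beat
print(round(posterior, 3))  # 0.135 - one good year barely moves the prior of 0.10
```

Even under these generous assumed numbers, a single winning year shifts the probability of skill only from 10% to about 13.5%, which is the kind of careful updating most people simply do not perform.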
|
|
|
Post by Arlon10 on May 14, 2020 11:10:32 GMT
I do not remember there being any mention of "behavioral" economics in my economics classes. I do remember, though, that we studied "perfect information." It is often assumed in simple modeling of markets, and the assumption is sometimes far off; it was just one of quite a few factors we studied that can throw the models off. We use models anyway because they help us understand the main factors. We did not assume that people maximize "utility." They probably do not. We did, however, assume they maximize what they believe they "want", or that they make choices based on what they consider "best" for their personal tastes. Specifically, they minimize "opportunity costs." Some people might wonder how they could possibly fail at minimizing opportunity costs, but there are those issues of perfect information and several other snags and obstacles.

I agree that altruistic spending is as much in the "personal interest" of the spender as any spending considered self-serving by others. That goes for utility as well. People make choices based on what is most useful from their personal perspective, which might not be considered "useful" by other parties. While they do not maximize "utility" as defined by various schools of thought, they are in reality maximizing utility as they uniquely define it. The terminology used by my economics instructors avoids defining what utility is, since that is arbitrary.

I suppose it is intuitive that there can only be one "truth." Science, ceteris paribus, searches for the one "right" answer to various questions. There is obviously an over-dependence on math and science lately to find the one "right" answer for human choices, which is ridiculous and a bad application of science. The variety of possibilities is beyond science to manage. 
There are of course the bounds of morality, but within those the choices are virtually limitless. There is a tempting efficiency in everyone making the same choices. Suppose everyone wanted their own unique design on their shirts; that would be very inefficient. At the other extreme, suppose everyone had to wear the same color shirt. That might save time, labor, and resources, but at the cost of individual personality and expression. The most glaring problem with limited choices is that they are made by people with average intelligence and, as some people note, no "perfect information." I find the choices of average people usually ill-informed. That's why I am not a communist or socialist. On the other hand, people with obvious mental deficiencies might prefer the choices made by the herd, since that might work out better than their own. They assume it is "science," although applying science that way is ridiculous. There is no "one true" design of shirts.
|
|
|
Post by Aj_June on May 14, 2020 23:06:20 GMT
I can't say about science, but yes, the early economists created models in which they unnecessarily complicated the theories with calculations of quantities that are not even tangible. How do you mathematically measure how much satisfaction I get from drinking one bottle of coke, how much from the second bottle, and what the marginal satisfaction is from each successive bottle? IMO, right from the start, economists should have taken a more down-to-earth stance than believing that people make perfect decisions and maximise their utility. So there was a bit of over-dependence on maths. But that is nothing new in the field of economics. I hope we can create more realistic models in the future rather than unrealistic ones.
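The coke-bottle point is usually illustrated with a diminishing-marginal-utility schedule. The "satisfaction units" below are entirely invented, which is exactly the problem: the arithmetic is trivial, but the inputs are not measurable.

```python
# Toy diminishing-marginal-utility schedule (all numbers invented).
# total_utility[n] = "satisfaction units" from consuming n bottles of coke.

total_utility = [0, 10, 17, 21, 22, 22]  # for 0..5 bottles

# Marginal utility of bottle n = total utility at n minus total utility at n-1
marginal = [total_utility[n] - total_utility[n - 1]
            for n in range(1, len(total_utility))]
print(marginal)  # [10, 7, 4, 1, 0] - each extra bottle adds less than the last
```

The fifth bottle adds nothing at all, which is the textbook story; the objection in the post stands, though, since nobody can actually observe those utility numbers.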
|
|