### Mathematics

As I did at the end of 2013 and 2014, so I do again here at the end of 2015 to recount some travel experiences, which I don’t normally write about here. I need not give the whole setup for such entries again; see my blog from the end of 2013: Travels of Spocklogic. The notables this year (travel blogs I finished or made additions to) include:

That’s the summary for 2015. Some are carryovers from 2014, but I finished those blogs in 2015, after my last post on travels (see: Travels of Spocklogic II) in December 2014, or earlier if I made additions. As I alluded to in recent entries, I will take a break from this Cogito Ergo blog for a while in 2016. I’ve had 20 years of internet exposure and have been blogging for 10 years (see: 20 Years of Internet and Mapping the Internet). I hope to return again with a fresh perspective down the line. There’s plenty to explore in the Cogito Ergo blog archives until then (see the link to Browse Blog Posts). Best wishes for the New Year 2016! See you in the future…

The news always seems full of surveys/polls about this or that, trying to predict trends or outcomes and explain society. Nowhere is polling more prevalent than in the political arena. One popular place to go for polling data is Rasmussen Reports, which says of itself, “If it’s in the News, it’s in our polls.” They do many surveys too, which are really just polls of another kind. Here are some seasonal examples I looked up on their website (as of 11/27/2015):

1.) Nearly 3-out-of-4 American Adults (72%) think stores start the Christmas season too early.
2.) 43% of American Adults say they have started their gift shopping. 54% have not.

About these polls we are told that 1,000 American adults were surveyed and that “The margin of sampling error is +/- 3 percentage points with a 95% level of confidence.” Hmmm, what does that mean? To understand this, some definitions are in order, specifically ‘margin of error’ and ‘level of confidence’.

Margin of Error (MoE) – Measure of the accuracy of the results, which indicates the difference between an estimate of something and its true value.
Level of Confidence (LoC) – Measure of the reliability of a result, which tells how confident we are in the margin of error.

Polls and surveys work by asking a random sample of the total population a series of questions. Obviously they cannot ask the total population (perhaps hundreds of millions), so they sample in a random way (it’s cheaper and quicker) and use that data to state something relevant. The numbers themselves can be thrown around, but how accurate are they? That’s where MoE and LoC come into play. It’s important to remember that the MoE and LoC depend on the sample size, not the total population size, provided that total population is large. For a 95% LoC, the MoE turns out to be 0.98/√n, where n = 1000 (the sample size). Do the math and it is 0.98/√1000 = 0.03 (or +/- 3%). In simple terms, this means the survey/poll is 95% confident that the error between the sampled population and the total population is +/- 3%. Said another way, if you keep polling in the same way, then 95% of the time the answer you get will be within 3% of the correct answer. The mathematics reveals that (contrary to popular belief) the relative sample size matters less than the absolute sample size. That is, the results are independent of the total population, no matter how big it is; it is just the sample size itself that matters.

How is it possible that a sample size as small as 1000 out of a total population in the millions or hundreds of millions has an MoE as small as +/- 3%? Welcome to the nature of the so-called ‘Bell Curve’. It’s also called the ‘normal distribution’, and it is a tool statisticians use to tell how far the sample is likely to be off from the overall population, that is, how big a MoE there is likely to be in a survey/poll.
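To see where the 0.98/√n comes from: it is the 95% z-score (1.96) times the worst-case standard error of a proportion, √(p(1−p)/n) with p = 0.5. A quick sketch in Python (my arithmetic, just checking the post's figure):

```python
from math import sqrt

z = 1.96   # z-score for a 95% level of confidence
p = 0.5    # worst-case proportion: p*(1-p) is largest at a 50/50 split
n = 1000   # typical poll sample size

# Margin of error; reduces to 0.98/sqrt(n) when p = 0.5
moe = z * sqrt(p * (1 - p) / n)
print(round(moe, 3))  # 0.031, i.e. +/- 3 percentage points
```

Note that doubling the accuracy (1.5% instead of 3%) would require quadrupling the sample size, since the error shrinks only as 1/√n.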

Under the most ideal conditions, the above is generally true, but a more realistic requirement for an LoC of at least 95% is that LoC ≥ 1 – 1/(4n·MoE²), which for n = 1000 gives MoE ~ 0.07 (or 7%). This turns out to be a more realistic number for mathematical reasons relating to the sampling itself and randomness (see Small samples, and the margin of error). Further, even this scenario is somewhat idealized, and questions can come up as to the nature of the population sampled, questions refused, undecided, misunderstood or answered untruthfully, and other intangibles which can play a role. Surveys and polls can be widely off depending on the nature of the questions and how they are answered or not answered. Treat them all with skepticism, but bear in mind they CAN be accurate even with a sample size as small as 1000. This seems to be the magic number (n = 1000) most survey/poll people use to get a 95% LoC with 3-7% MoE, and usually the ideal case of 3% MoE.
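Rearranging that conservative bound for the margin of error gives MoE = 1/√(4n(1−LoC)). A quick check of the ~7% figure:

```python
from math import sqrt

n, loc = 1000, 0.95
# Solve LoC = 1 - 1/(4*n*MoE^2) for MoE:
moe = 1 / sqrt(4 * n * (1 - loc))
print(round(moe, 3))  # 0.071, i.e. roughly +/- 7 points
```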

The truth of political polling is that if a 3% MoE is acceptable 95% of the time, then that is what they go with. People who poll and survey seem to have settled on this, and the sample size is usually 1000 people. It sounds unbelievable, but it’s true from a mathematical perspective. In all human endeavors there are always intangibles to be considered (some of which I’ve mentioned), and these can make surveys/polls quite unreliable. In addition, they can become irrelevant soon after they are taken, when events or circumstances change. My best advice is to treat them as you might the daily Horoscope, realizing they encompass a multitude of possibilities, but the reality is in the outcome itself. The mathematics does not lie and can be a predictor of trends and outcomes, even with a small sample. The greatest variable is not the behavior of human beings, which can reasonably be predicted under certain conditions, but the human beings themselves, who are both the predictor and the predicted simultaneously. We tend to change with the wind. I think of it as weather, which changes from day to day, week to week, month to month, while climate is the long-term average of weather, which can be predicted. Polls/surveys are like the weather and change daily, weekly, monthly, but perhaps long term they can be averaged to predict human behavior. This is somewhat the basis of Isaac Asimov’s Foundation series, where the science of psychohistory can predict the track of humanity into the far future, but the random element always plays a role, which can throw predictions off.

Remember always, mathematics doesn’t lie, but people do, though not always intentionally. We live in a very partisan and biased culture where so-called ‘news’ media conduct their own polls and present the results without even understanding the mathematics of what they mean. These media personalities of today are mostly sensationalist and/or just want to promote their conservative and/or liberal cause, whatever those nomenclatures mean anymore. I still remember the words of Dr. Fitz, as we called him, my Advanced Civics teacher in high school back in the late 1970s, who told us to read, listen and watch, then read between the lines. That advice has stuck with me my whole life, and never has it been a more valuable lesson than in our culture today.

Note: In general, for Margin of Error (MoE) at various Levels of Confidence (LoC), use these formulas, where n=sample size:

MoE at 99% LoC ~ 1.29/√n
MoE at 95% LoC ~ 0.98/√n
MoE at 90% LoC ~ 0.82/√n

If the sample fraction is > 5% of the total population, then also multiply the results by the factor √[(N – n)/(N – 1)], where n = sample size and N = total population size. This is the ‘finite population correction’. Usually N ≫ n, so this correction is negligible.
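Putting the note together, here is a small sketch of a helper that computes the MoE at a chosen level of confidence, applying the finite population correction when the sample exceeds 5% of the population. The function name and defaults are mine, not from any statistics library:

```python
from math import sqrt

def moe(n, z=1.96, N=None):
    """Margin of error for a proportion, worst case p = 0.5.

    z: 2.576 for 99% LoC, 1.96 for 95%, 1.645 for 90%
    N: total population size; the finite population correction is
       applied when the sample is more than 5% of N.
    """
    m = z * 0.5 / sqrt(n)             # e.g. 0.98/sqrt(n) at 95% LoC
    if N is not None and n / N > 0.05:
        m *= sqrt((N - n) / (N - 1))  # finite population correction
    return m

print(round(moe(1000), 3))           # 0.031 (the usual +/- 3 points)
print(round(moe(1000, z=2.576), 3))  # 0.041 at 99% LoC
print(round(moe(1000, N=5000), 3))   # smaller: sample is 20% of the population
```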

There are also Margin of Error calculators you can use, such as:

http://www.americanresearchgroup.com/moe.html

Statistics and mathematics aside, it’s really the quality of the questions, and how they are asked and responded to, that perhaps matters more. That is, how sound was the methodology of a survey or poll, and was there any ‘built-in’ (intentional or unintentional) bias? Statistics alone cannot answer that, as it’s a more subjective question. Non-sampling errors can always creep in, even in the best-designed survey/poll. These include a lack of true randomness, poorly designed questions, poor interviewers, and a host of other factors. These non-sampling errors can, in fact, often exceed the sampling errors themselves. It’s always best to treat surveys/polls with some skepticism; the statistics behind them are not the only indicator of their reliability.

The controversy surrounding the Patriots’ deflated footballs in the AFC Championship game on Jan. 18, 2015 has made some headlines in the news, and raised some physics questions. There has been talk of the ideal gas law, gauge pressure vs. absolute pressure and relative temperature vs. absolute temperature. So, I take this as an opportunity to explain some of what all this means, if you have followed it or have an interest. Let’s begin with the NFL rule book, which states:

The ball shall be made up of an inflated (12 1/2 to 13 1/2 pounds) urethane bladder enclosed in a pebble grained, leather case (natural tan color) without corrugations of any kind. It shall have the form of a prolate spheroid and the size and weight shall be: long axis, 11 to 11 1/4 inches; long circumference, 28 to 28 1/2 inches; short circumference, 21 to 21 1/4 inches; weight, 14 to 15 ounces.

By “pounds” it is assumed this means “pounds per square inch” or psi. However, this term refers to a gauge pressure, so the actual pressure is really gauge pressure + atmospheric pressure. To understand this, remember that the pressure measured with a gauge, such as you would with a tire, indicates pressure relative to the atmospheric pressure. Scientists often speak of atmospheres of pressure (atm) and define Standard Temperature and Pressure for sea level at 273.15 K as 1 atm = 101325 Pa, or 14.69595 psi. (Note: Pa is another pressure unit called the Pascal and K is a temperature unit called the Kelvin). So, consider 1 atm of pressure inside some container, that is, the same pressure inside the container as outside of it. A gauge will not show 1 atm but zero atm, since the pressure in the container is just the same as the pressure outside. This means that relative to the outside there is no excess pressure in the container; it is in balance, or equilibrium. This is the meaning of gauge pressure. Now, absolute pressure is technically more accurate when speaking of pressure, as it is the force that a gas applies to the container’s surface area, by virtue of the fact that on the order of 1E23 molecules are bouncing around off each other and the container wall because of thermal stimulation (heat), a form of kinetic energy.

Heat is not the same thing as temperature, though they are related to each other. Heat is a form of energy that flows from a hotter substance to a colder one, which have higher and lower temperatures, respectively. So, there must be a temperature difference for heat to flow. Consider our container again: it has a certain amount of heat associated with it, which can be probed by measuring the temperature. Another container made of a different substance may have more heat associated with it but still measure the same temperature, because the second container has more mass. Anyway, that’s the concept, and I wanted to get that out of the way to talk about temperature specifically. The temperature scales we commonly use every day to speak about the weather are measurements relative to some reference value. In the Celsius scale, for example, the reference value is the freezing point of water, or 0 °C. Fahrenheit uses 32 °F as the freezing point for peculiar scientific historical reasons. All measurements are made relative to these reference values, for example in speaking of above or below freezing. The Celsius and Fahrenheit scales are relative temperature scales and can have both positive and negative numbers. This is not so with an absolute temperature scale, which has only positive numbers. The Kelvin scale and the Rankine scale are absolute temperature scales. The Rankine scale has degree intervals equal to those of the Fahrenheit scale, with 0 °R equal to −459.67 °F. The Kelvin scale has degree intervals equal to those of the Celsius scale, with absolute zero at 0 K and the triple point of water at approximately 273 K. The triple point is the temperature and pressure where the three phases of a substance (solid, liquid and gas) are in equilibrium. The triple point of water, 273.16 K at a pressure of 611.2 Pa, is chosen as the basis of the Kelvin definition.
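As a small illustration of relative vs. absolute scales, here are the standard conversions in Python (the helper names are mine; the 70 °F example anticipates the locker-room temperature used later in the post):

```python
def f_to_c(f):
    """Fahrenheit to Celsius (two relative scales with shifted zeros)."""
    return (f - 32) * 5 / 9

def c_to_k(c):
    """Celsius to Kelvin: same degree size, zero moved to absolute zero."""
    return c + 273.15

def f_to_r(f):
    """Fahrenheit to Rankine: same degree size, zero moved to absolute zero."""
    return f + 459.67

print(round(c_to_k(f_to_c(70)), 2))  # 70 °F -> 294.26 K
print(f_to_r(-459.67))               # absolute zero -> 0.0 °R
```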

I explained all this (more than you the reader maybe wanted to know) because it is important to understand the difference between relative measurements and absolute measurements. This is important in scientific discussions describing values of things like pressure and temperature. So, we now have a better feel for things like gauge pressure vs absolute pressure and relative temperature versus absolute temperature. This allows discussion of the Ideal Gas Law that has been in the news. A good description of an ‘ideal gas’ is as follows: “An ideal gas is defined as one in which all collisions between atoms or molecules are perfectly elastic and in which there are no intermolecular attractive forces. One can visualize it as a collection of perfectly hard spheres which collide but which otherwise do not interact with each other.” The Ideal Gas Law is characterized by three variables: absolute pressure (P), volume (V), and absolute temperature (T) and written as:

$PV = nRT$

where n = number of moles of the gas, with each mole (abbreviated as mol) containing 6.02214129 × 10²³ atoms or molecules, known as Avogadro’s number. R is the gas constant (known as the universal or ideal gas constant), having a value of 8.314 J/(mol·K) or 10.731 ft³·psi/(lb-mol·°R). As will be seen, we won’t need these constants, but just note them for reference. The utility of this equation is that one can hold any variable constant and see how the others change. In the case we want to examine, our container is a football, and we can tentatively take the volume to be constant and see how the pressure changes with a known temperature change.

Before proceeding, let’s review the scenario that transpired in Foxborough, MA on the afternoon and early evening of Jan 18, 2015 before and during the AFC championship game between the New England Patriots and the Indianapolis Colts. The timeline of events can be found here – Timeline: Key Deflategate events probed in Wells Report. The key events needed for calculations are that the footballs were checked at 3:45 pm and found to be at or above the minimum 12.5 psi gauge pressure, though the values were not recorded. At 8:28 pm, during halftime, the footballs were retested and found to be below psi specifications. The exact measurements of the underinflated footballs can be found here – Finally, the halftime PSI numbers are known. A couple of additional pieces of information are needed: (1) the temperature and atmospheric pressure where the footballs were kept in Foxborough before the game and (2) the field temperature and atmospheric pressure when the footballs were taken off the field. Data is available about the weather conditions and can be found here – Foxborough Weather Conditions (Jan. 18, 2015). This is actually data from Norwood, MA, about 20 miles from Foxborough. Only a guess can be made of the locker room temperature (say 70° F, or 294.26 K) before the game, but the atmospheric pressure was approximately 29.9 inHg (or 14.686 psi) from the weather data. During halftime the temperature was approximately 50° F (or 283.15 K) and the atmospheric pressure 29.6 inHg (or 14.538 psi). We now have the needed data for some calculations on all the footballs, but let’s play with the ideal gas law, and assume a constant volume for the football at time 1 (3:45 pm) and time 2 (8:28 pm), that is V1 = V2 and

$P_1 V = nRT_1$

$P_2 V = nRT_2$

Combining these equations, V, n and R cancel out and we are left with:

$P_2 = {T_2 \over T_1} P_1$
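Evaluating this relation directly with the numbers above (70 °F locker room, 50 °F at halftime, 12.5 psi gauge minimum, and the two barometric readings) gives a quick sketch; a direct calculation lands close to, though not exactly on, the averaged gas-theory figures quoted later:

```python
p1_gauge, atm1 = 12.5, 14.686  # psi: pregame gauge minimum and barometric pressure
atm2 = 14.538                  # psi: barometric pressure at halftime
t1, t2 = 294.26, 283.15        # kelvin: ~70 °F pregame, ~50 °F at halftime

p1 = p1_gauge + atm1           # absolute pregame pressure: 27.186 psi
p2 = (t2 / t1) * p1            # constant-volume ideal gas law
print(round(p2, 2))            # 26.16 psi absolute at halftime
print(round(p2 - atm2, 2))     # 11.62 psi on a gauge: below the 12.5 psi minimum
```

So temperature alone drives a properly inflated ball almost a full psi under the legal minimum.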

I ran the numbers, correcting gauge pressure to absolute pressure from the weather data and using Kelvin-scale temperatures, for both alternate referees’ measurements (Prioleau and Blakeman). Initial minimum pressure was assumed to be 12.5 psi and 13.0 psi for the Patriots and Colts footballs, respectively, at T = 294.26 K and atmospheric pressure of 14.686 psi (with absolute pressure being 27.186 psi), while final temperature was 283.15 K with an atmospheric pressure of 14.538 psi. The results for the final absolute pressure on the 11 Patriots footballs and 4 Colts footballs are:

Patriots Footballs, based on initial pressure of 12.5 psi

Colts Footballs, based on initial pressure of 13.0 psi

Taking the average of the two alternate referees’ measurements, the gas law results, and the difference between them gives the following:

Patriots: Initial pressure = 27.186 psi, average halftime measured pressure = 25.836 psi, gas theory pressure = 26.10 psi, average ∆P = -0.324 psi
Colts: Initial pressure = 27.686 psi, average halftime measured pressure = 27.069 psi, gas theory pressure = 26.64 psi, average ∆P = 0.429 psi

This means that for the Patriots footballs the actual alternate referee measurements and the gas law agree to within 0.324 psi, the measurements being slightly lower than what the gas law would predict, while for the Colts footballs they agree to within 0.429 psi, the measurements being slightly higher than what the gas law would predict. Said another way, this is only about 0.32 to 0.43 in 27, or approximately a 1.2 to 1.5% difference between measurement and theory. The total average pressure drop, gas law aside, for Patriots footballs is 1.35 psi and for Colts footballs is 0.617 psi. Both teams show an average drop in pressure, so something happened to both teams’ footballs that caused them to measure lower pressure. With that said, it is also curious that Prioleau’s measurements are consistently higher than Blakeman’s for Patriots footballs, while Prioleau’s measurements are consistently lower than Blakeman’s for Colts footballs. I don’t understand this unless they switched gauges between measuring the Patriots and Colts footballs, which appears to be the case. There are a lot of unknowns here: the initial pressures of the footballs before the game were never recorded, the initial temperature in each locker room is not known, the time between when the footballs were taken off the field and when they were measured is not precisely known, and the football pressures at game’s end were presumably not measured. In addition, only 4 Colts footballs were measured because the referees ran out of time, according to the Wells Report, implying the Colts footballs were measured after the Patriots footballs, which may have given them more time to warm up. What was the time gap as officials went from the Patriots to the Colts locker room? All we really know is that halftime was around 13.5 minutes, during which measurements and reinflation took place.

Based on this analysis the conclusion would be (from a scientific point of view) that the footballs were not tampered with and the pressure differences are partly explained by the Ideal Gas Law. Hooray for physics! The footballs were re-inflated at halftime, but it doesn’t seem that anybody bothered to measure them again at the end of the game. Nevertheless, the Wells Report seems to reject the Patriots’ explanation using physics. The scientific analysis of “Exponent”, the consulting firm used in the Wells Report, seems thorough. However, the Wells Report may have cherry-picked what it wanted from the scientific report by Exponent to phrase what it wanted to say. A key statement in the Wells Report is: “Exponent concluded that, within the range of likely game conditions and circumstances studied, they could identify no set of credible environmental or physical factors that completely accounts for the Patriots halftime measurements or for the additional loss in air pressure exhibited by the Patriots game balls, as compared to the loss in air.” True, but both teams’ footballs lost pressure when measured at halftime, and the Patriots footballs measured 0.733 psi lower in lost pressure than the Colts footballs, according to my analysis. The Wells Report makes this to be ~0.7 psi. Interestingly, the deviations from the Ideal Gas Law were -0.324 psi for the Patriots and 0.429 psi for the Colts, an absolute difference of 0.753 psi. The point is that both teams have a pressure discrepancy that has to be explained by something. Instead, the Wells Report states, “According to our scientific consultants, however, the reduction in pressure of the Patriots game balls cannot be explained completely by basic scientific principles, such as the Ideal Gas Law, based on the circumstances and conditions likely to have been present on the day of the AFC Championship Game.” So, what about the reduction in pressure of the Colts footballs? What is that explained by?
This is not thorough, unbiased science as presented in the Wells Report. Somebody should scrutinize the Wells Report more, as it’s full of assumptions and goes so far as to say science does not explain the Patriots’ footballs’ pressure drop. If that’s true then, by the same token, what caused the pressure drop in the Colts’ footballs? By that logic, science must not be able to explain that either.

Well, the NFL punishment has been doled out, and it now seems more about the Patriots’ lack of cooperation in the investigation, or more specifically, Tom Brady’s participation. I can’t say I blame him for not cooperating in today’s hypersensitive society, where every little detail is scrutinized and people are presumed guilty until proven innocent. Maybe the Patriots’ real flaw is a culture of trying to gain a competitive edge without actually breaking any rules. It’s a grey line of morality, but part of sports, past and present. Some of the ways players try to get an edge up seem based more on psychology or physiology than physics. Athletes will be athletes and rely on brawn more than brains most of the time. Coaches or managers, on the other hand, often know more than they admit to, but it’s like protecting the commander in chief, and players do that as they should. Maybe the Patriots did tamper with the footballs, but maybe they didn’t – it’s a stretch at best (without complete data) to conclude they did. To single out Tom Brady just seems unfair to me. He seems an honest guy, and others say that of him too. His character is at stake here, and I hope he comes out on top in challenging the Wells Report!

Enough said, and here are some facts about footballs:

The NFL rule says it must be a prolate spheroid, which has a volume V = ${4\over 3}{\pi}{a^2}c$, where c is half the length of the long axis and a is half the length of the short axis, the semi-major and semi-minor axes, respectively. Since 1959, the inch has been defined and internationally accepted as being equivalent to 25.4 mm, so with 1″ = 2.54 cm, an NFL regulation football is 27.94 to 28.575 cm along the long axis (giving c = 13.97 to 14.2875 cm) and 53.34 to 53.975 cm around the short circumference, which with the formula circumference = $2{\pi}r$ gives a = 8.489 to 8.59 cm. Using these numbers in the formula for a prolate spheroid gives V = 4216.95 to 4416.02 cm³ (or 257.33 to 269.48 cubic inches).
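Here is that volume calculation sketched in Python for the low end of the rule-book dimensions (an 11" long axis and 21" short circumference); small rounding in the intermediate axes accounts for fractions of a cm³ difference from the quoted figure:

```python
from math import pi

IN_TO_CM = 2.54
c = 11.0 * IN_TO_CM / 2         # semi-major axis: half the 11" long axis
a = 21.0 * IN_TO_CM / (2 * pi)  # semi-minor axis: from the 21" short circumference

v = (4 / 3) * pi * a**2 * c     # prolate spheroid volume
print(round(v, 1))              # about 4217 cm^3, the low end of the range
```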

A football weighs 14 to 15 ounces. With 1 ounce = 28.349 gm, a football is 396.886 to 425.235 gm. It is generally assumed that the air in a fully inflated football accounts for only about 10 grams of its mass. Is this true? Assuming a gauge pressure of 13 psi (89632 Pa) at standard temperature and pressure (T = 273.15 K and P = 101325 Pa = 14.69595 psi) gives an absolute pressure inside the football of 190957 Pa. Knowing the volume of a football, in addition to the molecular weights of O (15.9994) and N (14.0067), we can figure it out. By weight, dry air contains 23.2% O2 and 75.47% N2, which accounts for 98.67% of the weight of air. The actual major constituents of air are shown below:

| Gas | By volume (%) | By weight (%) | Molecular mass M (g/mol) | Chemical symbol | Boiling point (K) | Boiling point (°C) |
|---|---|---|---|---|---|---|
| Oxygen | 20.95 | 23.20 | 32.00 | O2 | 90.2 | -182.95 |
| Nitrogen | 78.09 | 75.47 | 28.02 | N2 | 77.4 | -195.79 |
| Carbon Dioxide | 0.03 | 0.046 | 44.01 | CO2 | 194.7 | -78.5 |
| Hydrogen | 0.00005 | ~ 0 | 2.02 | H2 | 20.3 | -252.87 |
| Argon | 0.933 | 1.28 | 39.94 | Ar | 84.2 | -186 |
| Neon | 0.0018 | 0.0012 | 20.18 | Ne | 27.2 | -246 |
| Helium | 0.0005 | 0.00007 | 4.00 | He | 4.2 | -269 |
| Krypton | 0.0001 | 0.0003 | 83.8 | Kr | 119.8 | -153.4 |
| Xenon | 9 × 10⁻⁶ | 0.00004 | 131.29 | Xe | 165.1 | -108.1 |

So, we can be a little more exact and include carbon dioxide (CO2) and argon (Ar) to account for 99.99% of air composition by weight. Adding the numbers scaled by weight fraction gives:

Molecular weight of air = (0.7547 × 28.02) + (0.2320 × 32.00) + (0.0128 × 39.94) + (0.00046 × 44.01) = 29.1 g/mol
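This weighted sum, and the mole/mass calculation that follows from it, can be checked in a few lines. The 190957 Pa absolute pressure and 4316.5 cm³ average volume are the figures used in the text; tiny rounding differences from the quoted 0.362 mol and 10.5 gm are expected:

```python
R = 8.314            # J/(mol*K), universal gas constant
N_A = 6.02214129e23  # Avogadro's number

# Weighted molecular weight of dry air (fractions by weight, from the table)
m_air = 0.7547 * 28.02 + 0.2320 * 32.00 + 0.0128 * 39.94 + 0.00046 * 44.01
print(round(m_air, 1))      # 29.1 g/mol

p = 190957      # Pa, absolute: 13 psi gauge + 1 atm
v = 4316.5e-6   # m^3, average football volume
t = 273.15      # K, standard temperature

n = p * v / (R * t)          # moles of air in the football
print(round(n * m_air, 1))   # about 10.6 grams of air
print(f"{n * N_A:.2e}")      # about 2.19e+23 molecules
```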

The volume of a football we know is 4216.95 to 4416.02 cm³, so take V = 4316.5 cm³ as the average. Using the Ideal Gas Law we can calculate the number of moles of air in a football at standard temperature and pressure as: n = PV/RT = [(190957 Pa) × (0.0043165 m³)]/[(8.314 J/(mol·K)) × (273.15 K)] = 0.362 moles. Each mole has Avogadro’s number of 6.02214129 × 10²³ molecules. With the molecular weight of air as 29.1 g/mol, we find that a football holds an incredible 2.18 × 10²³ air molecules (more than 200,000 billion billion) that add up to approximately 10.5 gm. So, indeed it is true, and air only accounts for ~2.5% of a football’s mass. We may conclude from this that the NFL source who reportedly told Kravitz (Bob Kravitz at WTHR in Indiana) that “officials took a ball out of play at one point and weighed it”, and would investigate the deflation of footballs by the Patriots, was sort of half-baked, since a deflated ball would weigh only a few grams less. Kravitz broke the story and posed it as possible cheating. That’s how these things start and take on a life of their own. Anyway, that’s all I have to say or share on this matter, and a wild one it is for sure. Here are some other links I can offer:

1.) Ideal Gas Law – Hyperphysics
2.) Gas law calculators – WebQC
3.) Humid Air and the Ideal Gas Law – The Engineering Toolbox
4.) Heat and Heat Vs. Temperature – Online Physics Tutorials
5.) Calculate mass of air in a tyre from pressure – Physics StackExchange

As I did at the end of 2013, so I do again here at the end of 2014 to recount some travel experiences, which I don’t normally write about here. I need not give the whole setup for such entries again; see my blog from the end of 2013: Travels of Spocklogic. The notables this year are a couple of blogs I finished and some reviews that may be of interest:

That summarizes some travel selections for 2014. I did travel to Italy also in July 2014, and have some links to share for photo collections I put together for a special year in Erice to celebrate a 40th anniversary of the International School of Atomic and Molecular Spectroscopy (ISAMS):

Rino: 40 Year Erice Celebrations (2014) – Erice, Italy
2014 Erice Workshop: 30 July – August 5 – Erice, Italy
People (2014) – Erice, Italy
Places (2014) – Erice, Italy

In addition, I traveled to China again this year in November 2014, but am still working on my travel blog for that, so it will have to wait until my 2015 account of my travels. I will make this type of entry a tradition at year’s end to cover where I have been and what I have done in travel ways. It’s all rather like the City on the Edge of Forever perhaps…

I don’t often write about my travels in this WordPress blog (Cogito Ergo) as I have another site for that (TravBuddy). In this year of 2013, I completed a number of travel blogs on that site that are worth noting, and I give the links to them here. Mind you, I don’t know that any of my travel blogs are ever really completed. Each one is like a child I nurture and raise up, but it always needs attention in future ways. Anyway, I suppose I list them here for my own reference and also to offer them to others who may be interested in my travels. There is some connection of the blogs, one to another, in embedded personal ways, but they are also self-contained. Here they are:


Some of these blogs have been posted for some years, and I either added to them, made them more complete, and/or formed connections between them. Some of them are entirely new in 2013. They do tell a story in total, I suppose, and maybe that’s why I decided to make a sort of review of the Travels of Spocklogic here. They were also all the blogs featured on TravBuddy for me this year. My Italy blog (L’Avventura Dell Italia) seems never-ending and I have some more work to do on it, but the majority of important events are there for the most part. The last one in this list, the blog on China, is something I am still working on too, but I intend (or hope) to complete it before the end of 2013. I suppose this collection of blogs forms a personal journey of sorts that I traced this year regarding my life and its relation to travel. When I finish the China blog, maybe I will know what I have been endeavoring to understand and ultimately discover in my life. It’s not a teaser, or a cliffhanger, but maybe more a matter of what I will embrace. Sounds enigmatic, I suppose, but not really. It’s my personal perspective, the choices I make and what is ultimately best for me in a world of possibilities…

This may be a touchy issue, but I thought I would weigh in on the news that seemed to be somewhat ubiquitous regarding Angelina Jolie and her prophylactic mastectomy. She wrote an article in the NY Times about it entitled My Medical Choice. It’s a personal choice, and Jamie Lee Curtis seemed to praise her brave steps and quiet dignity in a Huffington Post article entitled Freedom of Choice, Freedom of Privacy. I can respect that: Angelina’s choice and some opinion on her bravery and dignity. I do, however, worry about what kind of message this sends. Yes, it is good to have a choice for health and longevity, but it’s not just a matter of statistics (though I will touch on the statistics here too). Being ‘at risk’ is not a disease. Even genetic or hereditary indicators do not account for exceptional cases where a gene is present but causes no disease. The science on this is all very new, from the last decade or two, and I worry that decisions are being made based on incomplete science and misinterpreted statistics. There are social psychology issues involved in such so-called risk-reduction surgery too. My big concern with a high-profile story like this is that it starts a wave of actions taken without thinking fully, just following a celebrity, who is really just a person like me, you or anybody, making personal decisions based on her own perspective and private reasons. That’s something to think about.

I’d like to discuss statistics now. Do you know the difference between a single-event probability and a conditional probability? Is there a difference between the chances of something happening versus the frequency of occurrence of that same something happening? If you don’t know the answers to these questions, then you are not alone. The medical community uses statistics to inform their patients, but your doctor probably does not really understand the statistics. He or she is a physician, not a mathematician, right? I’ll trust my doctor any day to prescribe an antibiotic for me, but to compute my odds of survival given a serious disease – no way! Doctors get these statistics from consensus in the literature. I will take the doctor’s numbers and then go investigate them. Let’s take the case of Angelina Jolie. In her article she says (because she tested positive for a BRCA1 mutation):

“My doctors estimated that I had an 87 percent risk of breast cancer and a 50 percent risk of ovarian cancer, although the risk is different in the case of each woman. Only a fraction of breast cancers result from an inherited gene mutation. Those with a defect in BRCA1 have a 65 percent risk of getting it, on average. Once I knew that this was my reality, I decided to be proactive and to minimize the risk as much I could. I made a decision to have a preventative double mastectomy. I started with the breasts, as my risk of breast cancer is higher than my risk of ovarian cancer, and the surgery is more complex.”

She says later in the article:

“I wanted to write this to tell other women that the decision to have a mastectomy was not easy. But it is one I am very happy that I made. My chances of developing breast cancer have dropped from 87 percent to under 5 percent. I can tell my children that they don’t need to fear they will lose me to breast cancer.”

Was the choice Angelina Jolie made the correct one? Personal feelings aside (and that's a self-analyzing choice), it is hard to say, and it depends on how you look at the statistics. Let's look at the absolute and relative probability. Suppose that without surgery 5 women in 100 die of breast cancer, while with surgery only 1 in 100 does. The absolute risk drops from 5 in 100 to 1 in 100, which means a risk reduction of 4 in 100, or a 4% reduction in risk. On the other hand, the relative probability says 4 saved out of 5, or an 80% reduction in risk. That is, the relative risk reduction is the absolute risk reduction (4/100) divided by the fraction of patients who die without treatment (5/100). Do the math (4/5 = 0.80). Another way of expressing all this is the Number Needed to Treat (NNT). The number of women who must undergo prophylactic mastectomy to save one life is 25, because 1 death in 25 (4 in 100) is prevented by such surgery.
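These three quantities are easy to compute for yourself. Here is a short Python sketch (the helper name is mine, and the 5-in-100 versus 1-in-100 mortality figures are just the illustrative numbers used above):

```python
# Illustrative risk arithmetic: mortality of 5 in 100 without surgery
# vs. 1 in 100 with surgery (the example figures used above).
def risk_reduction(risk_without, risk_with):
    arr = risk_without - risk_with   # absolute risk reduction
    rrr = arr / risk_without         # relative risk reduction
    nnt = 1 / arr                    # number needed to treat
    return arr, rrr, nnt

arr, rrr, nnt = risk_reduction(5 / 100, 1 / 100)
print(f"Absolute risk reduction: {arr:.0%}")  # 4%
print(f"Relative risk reduction: {rrr:.0%}")  # 80%
print(f"Number needed to treat:  {nnt:.0f}")  # 25
```

The same 4-in-100 benefit looks modest as an absolute reduction and dramatic as a relative one, which is exactly the point.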

What can we really say about the statistical numbers presented by Angelina Jolie? She was speaking from a relative probability perspective (I think), going from 87% to 5% and reducing her chances of cancer by 82 percentage points. In terms of absolute probability, it's still only a 4-5% risk reduction at best. The number needed to treat is important because it frames the population – 1 life saved in 25. What does it mean? It means that the life of one woman was saved, but the other 24 had no benefit from the mastectomy. Most high-risk women don't die of breast cancer even though they keep their breasts, and few die of breast cancer after having their breasts removed, either. I have considered the high-risk category here, as Angelina announced it, as a point for discussion, not judgement. I'd like to take the opportunity to extend good wishes to Angelina, Brad, their children & extended family during this time. Take care of each other!

My personal advice – make YOUR own choice, and know the numbers to make an informed one on risk reduction. Don't just know the numbers, but know what they mean too! Remember there is absolute risk reduction, relative risk reduction and the number needed to treat. There are also single event and conditional probabilities. It's a head full to be sure, but not such an egghead thing when your life is on the line and body parts are involved. Kind of a serious post from me with a message. It's not personal, it's just something I wanted to say in a logical way, but I can't help thinking (after writing this) that it has affected me in an emotional way too.

Photo I took at the Grand Bazaar in Istanbul, Turkey

Since today is International Pi Day, I thought a bit of mathematics would be in order to honor the occasion. It has nothing to do with Pi the constant really, but perhaps some knowledge to help you get a 'piece of the pie', so to speak! A good question in life is how to build wealth. For most of us this amounts to saving and good investment, but how does it work out mathematically, and what are the important variables involved? Let's find out… Don't worry if you are not mathematically inclined; I will try to keep it as simple as possible. The important point is to understand the variables and the concepts they represent for you, the investor. If you wish to skip over the mathematics, you can just refer to the boxed equations, which represent the core results.

Starting with a simple application will help to define some variables and introduce the concepts. We begin by asking a simple question: if you invest a given amount of money (P), called the principal, what is the total return amount (T1) that you will have at the end of 1 year, given a yearly rate of return (i), called the interest rate? The answer is simple:

$T_{1} = P(1+i)$

Now, we ask the question, what about the total return amount, T2, after 2 years? Well, after the first year, T1 becomes the principal for the 2nd year, that is:

$T_{2}=T_{1}(1+i)\\ {\quad}=P(1+i)(1+i)\\ {\quad}=P(1+i)^{2}$

And for the 3rd year:

$T_{3}=T_{2}(1+i)\\ {\quad}=P(1+i)^{2}(1+i)\\ {\quad}=P(1+i)^{3}$

and so on… The general answer for N years is:

$\boxed{T=P(1+i)^{N}}$ Eq. I

This equation embodies the concept of compound interest and shows the total growth (T) with an initial investment (P), assuming an average interest rate (i) over a number of years (N).
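Eq. I is a one-liner in code. Here is a quick Python sketch (the function name is mine; the $10,000 at 10% figures are the example numbers used later in this post):

```python
# Eq. I: total growth of a single lump sum P at average rate i over N years.
def total_lump_sum(P, i, N):
    return P * (1 + i) ** N

# $10,000 at an average 10% return for 25 years:
print(round(total_lump_sum(10_000, 0.10, 25), 2))  # → 108347.06
```

Notice that the growth is geometric in N, which is the whole power of compounding.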

Let's try something a little more advanced now and ask a more complicated question: how does your money grow if, instead of investing one lump sum, you invest a certain amount per year? Suppose you invest (P) in the first year; then your total after that year is simply T1=P(1+i), as before. In the second year you again invest (P) on top of T1, so what is the total T2 after the second year? It is:

$T_{2}=(T_{1}+P)(1+i)\\ {\quad}=[P(1+i)+P](1+i)\\ {\quad}=P(1+i)[1+(1+i)]$

And for the 3rd year:

$T_{3}=(T_{2}+P)(1+i)\\ {\quad}=[P(1+i)[1+(1+i)]+P](1+i)\\ {\quad}=P(1+i)[1+(1+i)+(1+i)^{2}]$

The trend is clear and we have what is called a geometric progression. Writing the terms in brackets as a geometric series:

$1+(1+i)+(1+i)^{2}+...=\displaystyle\sum_{x=0}^{N-1}\left(1+i\right)^{x}$

where N is the number of years. It gets a little complicated here, so bear with me. We can rewrite the summation as:

$\displaystyle\sum_{x=0}^{N-1}\left(1+i\right)^{x}=\displaystyle\sum_{x=0}^{N}\left(1+i\right)^{x}-\left(1+i\right)^{N}$

At this point I can use a trick with summations of the form $S_{n}=\displaystyle\sum_{n=0}^{N}a^{n}$

$aS_{n}=\displaystyle\sum_{n=0}^{N}a^{n+1}$

$aS_{n}-S_{n}=\displaystyle\sum_{n=0}^{N}a^{n+1}-\displaystyle\sum_{n=0}^{N}a^{n}=a^{N+1}-1$

which gives $S_{n}={{a^{N+1}-1}\over{a-1}}$. Setting $a=1+i$, we can now write:

$\displaystyle\sum_{x=0}^{N-1}\left(1+i\right)^{x}={{(1+i)^{N+1}-1}\over{(1+i)-1}}-(1+i)^{N}={{(1+i)^{N}-1}\over{i}}$

So, in general, we have:

$\boxed{T=P(1+i){{(1+i)^{N}-1}\over{i}}}$ Eq. II

This again illustrates the power of compound interest, showing the total growth (T) with a yearly investment (P), assuming an average interest rate (i) over a number of years (N).

As a check, Eq. II should reduce to Eq. I when N = 1. In this case [(1+i)-1]/i = 1 and indeed it does! So, what does all this mean anyway?
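Eq. II is just as easy to code up. Here is a small Python sketch (the function name is mine), including the N = 1 sanity check from above:

```python
# Eq. II: total growth when P is invested every year at average rate i
# for N years: T = P(1+i)[(1+i)^N - 1]/i.
def total_annual(P, i, N):
    return P * (1 + i) * ((1 + i) ** N - 1) / i

# Sanity check: for N = 1, Eq. II reduces to Eq. I, i.e. P(1+i).
assert abs(total_annual(1_000, 0.10, 1) - 1_000 * 1.10) < 1e-9

# Two years of $1,000 at 10%: (1100 + 1000) * 1.1 = 2310.
print(round(total_annual(1_000, 0.10, 2), 2))  # → 2310.0
```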

Eq. I – Suppose you invest $10,000; what can you expect after N years, assuming an average interest rate of 10 percent? For the last 25 years the stock market return has been around 10% per year (or i = 0.1). Even historically since 1929 (after the Crash) the return is about the same over that time, despite lower returns in the last 5 to 10 years: around 4.5% per year for the last 10 years and about 2.3% for the last 5. This data is as of 2012.

Eq. II – Suppose you invest $1,000 per year; what can you expect after N years, assuming an average interest rate of 10 percent?

I chose these examples as they are essentially equivalent ways of investing money to make a million. You can start with 10K and let it ride from the start, or you can invest 1K per year over the same time span. There are other combinations of investment strategy, and you must choose the one that suits you best. I have provided the computational framework, but you must decide what works for you. If you are young, time is on your side; if you are older, you will have to play catch-up, so to speak. Play around with the numbers in the equations I provided. There are calculators online that do such things too. Here is one from Dave Ramsey you might find useful. His calculator does monthly compounding for an average yearly interest rate, which comes out a bit more than the equations I derived, since I only consider yearly compounding. Have fun and play around with it now that you may understand the principles a little better, which was the primary objective of this blog in quantitative and qualitative ways. In that spirit, may your investing ways serve you well in latter days…
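To see why I call the two strategies essentially equivalent, we can invert the boxed equations for N. A Python sketch (function names mine; it assumes a steady 10% average return, which real markets certainly won't deliver every year):

```python
import math

# Invert Eq. I, T = P(1+i)^N, for N:
# years for a lump sum P to grow to a target T.
def years_lump_sum(P, T, i):
    return math.log(T / P) / math.log(1 + i)

# Invert Eq. II, T = P(1+i)[(1+i)^N - 1]/i, for N:
# years of annual investments of P to reach a target T.
def years_annual(P, T, i):
    return math.log(1 + T * i / (P * (1 + i))) / math.log(1 + i)

print(round(years_lump_sum(10_000, 1_000_000, 0.10), 1))  # ~48.3 years
print(round(years_annual(1_000, 1_000_000, 0.10), 1))     # ~47.4 years
```

Both routes hit the million mark after roughly the same number of years, which is the sense in which they are equivalent.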

None of these calculations accounts for inflation, which is a factor when comparing the total amount calculated in today's dollars vs. what it might be worth in the future. Inflation (like most interest rates) varies year to year, but has a historical average of about 2-3% per year. Conceptually, accounting for it is analogous to the interest gain over N years, but working in reverse. The illustration here was to show the power of numbers in geometric progression, which is reflected in compound interest gains over time. This is a great advantage, as shown here, and it can be said that indeed, as the old adage goes (and as mathematically verified here), Time is Money!
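Deflating a future total back into today's dollars works just like Eq. I in reverse. A small Python sketch (the function name is mine, and the 2.5% rate is simply the midpoint of the 2-3% historical average mentioned above):

```python
# Convert a future amount T into today's dollars, assuming an average
# yearly inflation rate over N years (the inverse of Eq. I).
def real_value(T, inflation, N):
    return T / (1 + inflation) ** N

# A million dollars reached 48 years from now, at ~2.5% average inflation,
# is worth roughly $306,000 in today's dollars:
print(round(real_value(1_000_000, 0.025, 48)))
```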

Happy $\pi$ Day!
