Monday, October 17, 2022

Aether... or how I learned to love Supersymmetric String Theory



If you were to write the history of aether, the anomalous luminiferous substance said to fill all voids, it would probably follow something like Joseph Campbell's “Hero’s Journey”: there is the “Call to Adventure”, which is the philosopher-cum-scientist’s call to describe some phenomenon desperately in need of an explanation. From an empirical and scientific perspective, I imagine it could start with Aristotle’s “Hand of God”, the basic explanation for inertia: he suggests that a thrown ball continues on its journey because an unmeasurable force guides it to its final destination. This explanation would last for quite some time, as in hundreds of years.

Then comes “The Threshold”! Wait! Galileo? Why did you do this inertia experiment? That ball dropped from the top of the mast of a moving sailboat did *not* fall in a straight line? In the words of Diondre Cole on SNL, What’s Up With That? The “Hand of God” now moves in mysterious parabolic ways? Questions ensue, doubt is sown, and for hundreds of years afterward, aether becomes baked into everything from Newton's “action at a distance” to Maxwell's EM propagation and Tesla’s induction. We think we need this substance, but we're not really sure why, and we’re not entirely sure what it does or how it does it. Not much else to see here for a hundred years or so.

Then comes “The Abyss”: the Michelson–Morley experiment and, to make it worse, a curb-stomping by Einstein, who says, à la Fred Sanford, “No, you dummy”... there is no luminiferous aether, and there doesn’t seem to be a good reason to have had it in the first place. In fact, our measurements demand it not be there.

Let us think about this fact from a different perspective: there is no medium for those “Good Vibrations”. Nothing permeates the space between us besides the molecules suspended in the air. And that aural glow around you is not celestial; it’s just excited air molecules or bending light rays.

But the argument for this (scientific) search is deeply rooted in ‘Western’ determinism, which seeks to identify causal relationships between a mechanism and (currently unexplainable) phenomena. Why is this so? Is our desire to identify (causal) relationships between theory and measurables simply driving another deterministic (super-string theory) explanation of our universe? Or are we really resolving and uncovering the mysteries of the universe?

This can be contrasted with the Eastern idea of “Akasha”, which in Vedantic Hinduism refers to the ‘first’ element created. Akasha literally means “sky” or “heavens”, and in this tradition it is the “basis” and “essence” of all things in the material world: the ‘first’ element ‘created’ (thereafter come air, fire/energy, water, and earth). The Buddhist interpretation divides Akasha into limited (discrete) and endless (infinite) space.

Returning to the Hero’s Journey: after the Michelson–Morley experiment, our description of the physical universe no longer requires any concept of aether… or does it? An uncomfortable outcome of Heisenberg’s uncertainty principle is that the universe can do funny things over very short time intervals. Dirac certainly had some ‘strange’ ideas on this and even suggested a) particles not being point-like, to justify the propagation of superluminal interactions, and b) a revived concept of aether in which the vacuum itself consists of a mixture of positive and negative stuff. In short, he proposed that chaotic, randomly moving particles could exist with some strong caveats, like covariance, rendering them undetectable by the Michelson–Morley experiment. This generalizes into the ‘stochastic’ interpretation of quantum mechanics: the probabilistic nature of QM is not a limit of knowledge, as suggested by Einstein, but a natural consequence of a chaotic aether. Moreover, the EPR paradox and other QM observations could be explained by this interpretation! Is this Transformation? Atonement?

To make matters worse, super(symmetric) string theory is proposed to connect all the fundamental forces of nature into a single theory. The fundamental constituents of matter are Planck-length strings that vibrate at resonant frequencies. And so now we must ask, “What is it that is vibrating in the first place?” Are we now in the “Return” of the Hero’s Journey? Is the Call to Adventure now the experimental verification of string theory?

This all raises several questions.
1. As noted earlier, from a Western perspective, are we going down the determinism rabbit hole if we believe in the existence of ether?
2. Is this affinity to aether a construct of humankind?
3. Is it ‘unreasonable’ to say, “sending good vibrations”? 
4. Is this ether responsible for common interpretations of this touchy-feely ‘connectedness’ that some people feel is true?



Monday, July 25, 2022

Philosophical Implications of Entropy

Preliminary readings:

https://thestandupphilosophers.co.uk/the-trump-card-of-modern-nihilism-entropy/

https://youtu.be/Cco0T7cj-B4

https://www.life.illinois.edu/crofts/papers/Life_information_entropy_and_time.html

https://thestandupphilosophers.co.uk/what-is-entropy/

and if you got this far....

https://www.hindawi.com/journals/complexity/2020/8769060/


Musing

The concept of entropy is broad and deep, with both physical and intellectual interpretations. In our physical universe, entropy was first introduced in 1850 by the German physicist Rudolf Clausius and evolved into the second law of thermodynamics, primarily through the rigor of Boltzmann and his approach to estimating the most probable state in statistical mechanics. By linking entropy to probability, Boltzmann describes it not as a mode or state of the mass and energy of a system, but as the mode and state of the organization, matching, and distribution of that mass and energy. Since entropy is an estimate of the distribution of mass and energy at any given state, a change in entropy suggests a change in how mass or energy represents itself. As the laws of physics tell us, that change in entropy is never negative: the change in organization is unidirectional by virtue of how one defines a change.
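To make Boltzmann's counting idea concrete, here is a minimal toy sketch of my own (not from any of the readings, and with Boltzmann's constant set to 1): for N two-state particles, think N coin flips, the multiplicity W of a macrostate is a binomial coefficient, and S = ln W is largest for the most evenly mixed state.

```python
# Toy illustration of Boltzmann's counting argument: for N two-state
# particles, the multiplicity W of the macrostate with k "up" is C(N, k),
# and S = ln(W), taking Boltzmann's constant as 1 for simplicity.
import math

def multiplicity(n, k):
    """Number of microstates realizing the macrostate with k of n 'up'."""
    return math.comb(n, k)

def boltzmann_entropy(w):
    """S = ln(W), with Boltzmann's constant set to 1."""
    return math.log(w)

n = 100
s_even = boltzmann_entropy(multiplicity(n, 50))  # most probable macrostate
s_skew = boltzmann_entropy(multiplicity(n, 10))  # highly ordered macrostate
print(f"S(50/50 split) = {s_even:.2f}")
print(f"S(10/90 split) = {s_skew:.2f}")
assert s_even > s_skew  # the evenly mixed state has the higher entropy
```

The evenly mixed macrostate dominates overwhelmingly as N grows, which is exactly why the "most probable state" is the one we observe.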

Almost 100 years later, Shannon delves into what “information” means in his study of information theory. He quantifies information by surprise: a source is a system that generates messages or signs, each with its own probability, and the less predictable the output, the more information it carries.
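A minimal sketch of Shannon's measure (my own toy numbers): the entropy of a source is H = -Σ p·log2(p) bits, so a fair coin carries a full bit per toss while a certain outcome carries none.

```python
# Shannon entropy in bits: H = -sum(p * log2(p)) over a source's symbol
# probabilities. Maximum randomness gives maximum entropy; certainty gives zero.
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete distribution (zero-probability terms skipped)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair_coin   = shannon_entropy([0.5, 0.5])  # 1.0 bit per toss
biased_coin = shannon_entropy([0.9, 0.1])  # about 0.47 bits per toss
certainty   = shannon_entropy([1.0])       # 0.0 bits: no surprise, no information
print(fair_coin, biased_coin, certainty)
```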

These two ideas have profound interpretations in a metaphysical sense. 

From the Boltzmann perspective, entropy is never less than zero (at least in the physical universe we experience). As far as I can tell, evidence that dS > 0 requires an increment of time over which to measure a change. So, does this mean that when dS ~ 0, the state of a system is what it is when there is no corresponding change in time? Missing bits of DNA after millions of replications would suggest so.

Second: what is “information theory” anyway, and what does randomness have to do with information? Shannon entropy quantifies this in a purely mathematical way. Perhaps this is really just a relative argument, since one doesn’t have information in a vacuum; rather, information is put into context with other information (which supports my general theory that McIntosh apples, with their colorful skin, are way more informative than plain old Golden Delicious apples… and yes, I will fight you on this).

Is the universe just another victim of entropy? 

What also needs to be discussed are these loosey-goosey interpretations of entropy and how they connect with philosophy in our modern-day lives. These are broad, generalized statements: "… well, things just fall apart, so … [insert waxing poetic]”. But underneath all these sentiments is a worldview that “nothing has meaning” because

A) of the inevitable collapsing of the universe.

B) my brain and neurons are two different things, so I give up.

C) my pastor told me so.

D) etc.,

Of particular note, A) has some (physical) legitimacy, since the universe will tend toward a state with lower energy (quick cut to the “entropy is justification for nihilism” memes) … because it’s physics! Things break down, and we just have to deal with the consequences.


Random topics of query:

1. The YouTube video purports that “we create order -and progress- at the expense of disorder elsewhere.” Do you agree with this assertion?

2. Is entropy really nihilism's trump card?

3. Does thinking of a universe with negative entropy make sense?

4. What is the fascination with humans trying to create order anyway? 


Wednesday, March 31, 2021

Seimens Medical Systums releases new modification for KD-era linacs

 For immediate release: March 32, 2021

Seimens Medical Systums releases new modification for KD-era linacs

We are proud to provide a new method of equipping existing Siemens KD-era linacs with a novel option for significantly improving the quality of care of patients: our new Digital Optimizer for Radiotherapy via Klystron Synchronization, or DORKS. DORKS provides a means of delivering a tumoricidal dose while greatly sparing normal tissues from the harmful side effects of radiation therapy. Dr. Lee Van Cleef, head engineer and part-time gunslinger:

DORKS allows us to enter new markets and opens new opportunities for research. We expect great things from DORKS. DORKS will help pave the way for our future here at Seimens Medical Systums.

 



Happy April Fools Day!







Friday, August 28, 2020

Medical Physics Games

Medical Physics Games in Education

I had the wonderful opportunity to learn about educational tools and strategies from Cornell's Center for Teaching Innovation. One topic of interest was diving into some of the challenges educators face with learners on professional tracks, such as medical residents, interns, and students, as they follow their educational trajectories into their professions. Having been involved in Medical Physics training and education for quite some time, I have come to appreciate the challenges Medical Physics graduate students and residents often face, as the time to dig deep into the content required for competency in their practice must be balanced with the time devoted to completing their projects or clinical rotations.

Tools like the flipped classroom, digital media and resources, and other strategies that help create a safe learning environment heavily influence how well learners retain knowledge. Games can be a powerful tool to reinforce knowledge retention in youth. More recently, there has been a surge of research delving into how gaming could be deployed in medical education.

There are a tonne of great educational resources for Medical Physics (in fact, there are perhaps too many... which gives me a point of entry into another project aimed at linking that content with learning objectives as defined in the IAEA syllabus for Radiation Oncology Medical Physics education). But there were not a lot of 'fun' educational activities targeted at Medical Physics. So that is where this journey begins!

I've started compiling a list of medical physics educational games, as well as devoting some energy to making some. Below is a growing list. If you have some to share, please e-mail me your suggestions!

Radiation Oncology Game - Intended Learners

Radiation Oncology Residents (Human and Veterinary)
Diagnostic Imaging Residents  (Human and Veterinary)
Medical Physics Graduate Students
Medical Physics Residents

Individual or groups
(more to come)

Groups
Family Feud - Radiation Oncology Medical Physics (X-ray interactions/sources, radioisotopes)

Counterfactual learning systems

Separating causation and correlation in AI systems is a challenge because most machine learning systems look for trends, not 'counterfactual' information, which is more like the way we humans, doctors included, think.

People, like doctors, make decisions based on what they know to be true and untrue, and build causal reasoning into a diagnosis. Most #machinelearning systems don't build in causality: they are built on associations and correlations. We don't care that the sky is probably blue when we get a cold, but we do care that your T-cell count is low when you get a cold (causation vs. correlation). Counterfactual data? "Let's get a chest x-ray / ultrasound / CBC ..." i.e., some data that rules out other possibilities to see how a symptom relates to a disorder (directly or indirectly).

But what rules can you build for machine learning? (Un)surprisingly, this paper shows it can be simple (because that's probably how *our* brains work): the disease should be consistent with the diagnosis, rule out what isn't possible, and keep it simple: 1 Dx fitting M symptoms is better than N Dx fitting M symptoms. They go on to define quantities called "expected disablement" and "expected sufficiency". The former is obvious, but the latter is like "sufficient cause", and they state theorems, one of which is that disablement and sufficiency are sufficient conditions for the rules above. But real data is noisy, which muddies the variables, so there needs to be a way to account for noise (insert mathy stuff here).

That's all fine, but the litmus test is: "How does this compare to actual clinical decisions?" In short, a physician achieves higher accuracy in diagnosing simpler problems, and the algorithm outperforms for more complex problems. That's good for rare disease classification. It also makes sense, as the story of #machinelearning and #AI in medical diagnosis suggests utility as a 'decision support tool', not a fully autonomous one. The difference here is that the model behaves more like a clinician would. For you Bayesians ... when you first learned Bayes' Theorem, I bet you pondered, "Why can't we do counterfactual inference in medical diagnosis? ...policy making? ...court decisions?"
This article is a nice progression of how we can use AI based on causation - not just correlation. Don't believe me? Read for yourself.
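To make the associative-vs-counterfactual distinction concrete, here is a toy sketch. Everything below (the diseases, priors, and link strengths) is invented for illustration, and the "disablement proxy" is a simplified interventional stand-in, not the paper's exact twin-network computation: a tiny noisy-OR model where one score ranks diseases by posterior probability, and the other asks how much of the observed symptom burden would be expected to vanish if the candidate disease were switched off.

```python
# Toy noisy-OR diagnosis model (all numbers invented for illustration).
# lam[d][s] = P(disease d alone turns symptom s on); LEAK is the chance a
# symptom appears with no disease present at all.
import itertools

PRIORS = {"flu": 0.10, "cold": 0.20}
LAM = {"flu":  {"fever": 0.9, "cough": 0.6},
       "cold": {"fever": 0.2, "cough": 0.8}}
LEAK = 0.01

def p_symptom(s, active):
    """Noisy-OR: P(symptom s on | set of active diseases)."""
    p_off = 1 - LEAK
    for d in active:
        p_off *= (1 - LAM[d][s])
    return 1 - p_off

def posterior_over_configs(evidence):
    """Posterior over disease on/off configurations given observed symptoms."""
    weights = {}
    for config in itertools.product([0, 1], repeat=len(PRIORS)):
        active = frozenset(d for d, on in zip(PRIORS, config) if on)
        w = 1.0
        for d, on in zip(PRIORS, config):
            w *= PRIORS[d] if on else (1 - PRIORS[d])
        for s, val in evidence.items():
            ps = p_symptom(s, active)
            w *= ps if val else (1 - ps)
        weights[active] = w
    z = sum(weights.values())
    return {k: v / z for k, v in weights.items()}

def associative_score(d, evidence):
    """Posterior marginal P(d on | evidence): correlation-style ranking."""
    post = posterior_over_configs(evidence)
    return sum(p for active, p in post.items() if d in active)

def disablement_proxy(d, evidence):
    """Interventional proxy for 'expected disablement': expected drop in the
    probability of each observed-positive symptom if d were switched off."""
    post = posterior_over_configs(evidence)
    total = 0.0
    for active, p in post.items():
        for s, val in evidence.items():
            if val:
                total += p * (p_symptom(s, active) - p_symptom(s, active - {d}))
    return total

evidence = {"fever": 1, "cough": 1}
for d in PRIORS:
    print(d, round(associative_score(d, evidence), 3),
          round(disablement_proxy(d, evidence), 3))
```

The point of the sketch is only the shape of the two questions: "how likely is this disease given what I see?" versus "how much of what I see would go away if this disease weren't there?"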

Extracellular vesicle and particle biomarkers and AI

A very interesting article on extracellular vesicle and particle biomarkers and how they might be used in cancer detection.

 

https://www.sciencedirect.com/science/article/pii/S0092867420308746?via%3Dihub


There are a gazillion authors from a bajillion institutions on this paper. Collaboration!


The gold standard to confirm cancer and other ailments is a tissue biopsy, where a small sample of tissue is extracted from the suspicious growth. But extracting a tissue sample isn’t possible in many situations, especially when there are co-morbidities and the biopsy can introduce more problems than it attempts to solve.


So ‘liquid’ biopsies are another approach: stuff like drawing blood, lymphatic fluid, bile, etc., which is not as difficult. But that stuff isn’t where the tumor is… it's stuff floating around the body. Some of the gunk that floats around outside the cell are EVPs, or ‘extracellular vesicles and particles’. Basically, they’re goops of stuff that float outside the cell, originating from the ‘sorters of things’ in your cells. I (probably mistakenly) think of them as recipe pages floating outside the bookstore that sells recipe books. Except there are a gazillion (actually billions of EVPs) recipes and a gazillion books: trying to figure out what page came from what book would seem an impossible task, right? Well… this is where the story gets interesting!


This team used machine learning techniques to sort through all the EVPs based on size and other subcategories (mice/human, cancers). They found that the relationship between 10K+ EVPs and tumors was not the same in mice and humans (interesting, since mouse models are used in so much research). They then sifted through all these possible markers to see if they could be used as a cancer detector.


How do you sort through literally tens of thousands of markers for trends? Reliably? #Machinelearning, of course. They found that the presence/absence of 13 common EVPs could be used to classify both lung and pancreatic cancers. But are those little floaters actually associated with tumors? In other words, is there a relationship between the biopsy findings and the floaters?


While their dataset was kinda small, they could verify the biopsy findings with the floaters to better than 90% sensitivity/specificity. (Sensitivity is how well you can detect something, like how likely you are to stop at a sign that looks like a stop sign; specificity is how well you can rule all other possibilities out, like how well you ignore the sign that looks like a stop sign but really isn’t.) They then attempted to ensure that what they saw wasn’t just stuff you’d see normally... not a trivial task.
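For the metric-curious, both numbers fall straight out of a confusion matrix. A minimal sketch with made-up counts (not the paper's data):

```python
# Sensitivity and specificity from confusion-matrix counts.
# The counts below are hypothetical, purely for illustration.

def sensitivity(tp, fn):
    """Of the truly positive cases, what fraction did the test catch?"""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Of the truly negative cases, what fraction did the test clear?"""
    return tn / (tn + fp)

# Hypothetical validation run: 95 true positives, 5 missed cases,
# 92 true negatives, 8 false alarms.
print(f"sensitivity = {sensitivity(95, 5):.0%}")   # 95%
print(f"specificity = {specificity(92, 8):.0%}")   # 92%
```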


What does it all mean? Maybe *earlier* cancer detection? Increased precision cancer detection? Dunno… but it is super cool that floaters in the blood could be so precise in detecting disease. These EVPs may be echoes of the body saying ‘something ain’t right’. We didn’t have the tools to be able to appreciate this signal until we developed the technology to detect the echoes.



Super cool.


Meet RoboBEER

Meet RoboBEER, a robotic beer pourer.


As you know, the demand for high-quality beers worldwide has exploded over the last few decades. What drives quality? Well, one way to discern quality is to objectively characterize features within the beer.


What features, you may ask? Some of them are visual, like color and foam-ability: maximum volume of foam, total lifetime of foam, foam drainage, and the size of the bubbles in the foam. But not just any idiot can pour the beer, as a Guinness lover will tell you, since a good pour is crucial. Fortunately, RoboBEER can pull the ‘perfect’ pint: RoboBEER pours 80 mL (+/- 10 mL) while monitoring the liquid temperature and assessing the alcohol and CO2 levels, all through your kid's Arduino control board and a Matlab interface (yay Matlab!).


But what about more important features, like taste? Surely no robot could do that, right? No way. But… maybe you could predict things like mouthfeel from all the features obtained by RoboBEER? You could capture descriptions of taste from experts through a questionnaire with 10 basic categories: bitter, sweet, sour, aroma in grains, aroma in hops, aroma in yeast, viscosity, astringency, carbonation mouthfeel, and flavor of hops. Then, have them sample twenty-two beers. (What I would do to be a part of this study!)


Could you train a neural network to predict what the beer would taste like just based off the data from RoboBEER?


A ‘feedforward’ neural network was designed where, essentially, you take all the inputs from RoboBEER (head size, color, etc.) and the outputs from the tasters (bitterness, sweetness, mouthfeel) and see if the network can predict the taste from those inputs. You do some fun math like principal component analysis to help sort the data and find patterns, pump it all into the network for training, and what do you get?
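Here is a minimal sketch of that kind of pipeline, with invented data and dimensions (this is not the authors' code, and RoboBEER's real feature set and network details live in the paper): PCA to compress the pour features, then a small one-hidden-layer feedforward network trained by plain gradient descent.

```python
# Sketch of a PCA + feedforward-network pipeline on a fake RoboBEER-style
# dataset: 22 "beers" x 8 pour features, one sensory target per beer.
# All data, sizes, and hyperparameters here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(22, 8))                       # fake pour features
y = X[:, :3].sum(axis=1, keepdims=True) + 0.1 * rng.normal(size=(22, 1))

# PCA via SVD: keep the top 3 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                                  # (22, 3) reduced features

# One-hidden-layer feedforward network, trained by gradient descent on MSE.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.01
for _ in range(2000):
    H = np.tanh(Z @ W1 + b1)                       # hidden layer
    pred = H @ W2 + b2                             # linear output
    err = pred - y
    # Backpropagate mean-squared-error gradients.
    gW2 = H.T @ err / len(y); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)
    gW1 = Z.T @ dH / len(y); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(Z @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"training MSE after fitting: {mse:.3f}")
```

The actual study used Matlab and far richer inputs; the point of the sketch is only the shape of the pipeline: compress, feed forward, fit.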


For the independent testing data, the AI system could predict what a beer would taste like from RoboBEER data alone with an accuracy of 86%. What does this mean? Well… very likely, RoboBEER is a better judge of beer than you are. And it doesn’t even have to taste the beer.


Don’t believe me? Read for yourself.

https://onlinelibrary.wiley.com/doi/epdf/10.1111/1750-3841.14114