Where do base units come from, and how are they related to each other? The second, the metre and the other base units are useful approximations for describing our universe. What we measure are mere ratios between measured values and base units. The fundamental question is how to measure the dimensions of spacetime with the utmost precision if every part of the universe is constantly changing – not only due to the thermal motion of atoms and molecules, but also due to quantum vacuum fluctuations, not to mention the expansion of the universe.
See – Base_units_preview
All base units are referenced to time. What is time? Rather, let us ask: how do we measure time? We certainly don’t measure time by chaotic changes, random changes, or changes in transition states between chaos and order. To measure time we need regular changes. When did time begin to exist, and when will it cease to exist? The measurement of time exists only because of regular changes – changes that we consider regular, such as the various oscillations, starting with the pendulum, then quartz, and finally the oscillations of electromagnetic radiation.
Consider the progression of a function that cannot be described mathematically. The function has different parts, and some parts appear to be regular oscillations – see below.
In the upper image we can see the parts where time can be measured, and then the areas where it cannot be measured – where, in our opinion, we have no reference regular process. And how do we verify the regularity of the pendulum’s oscillations? By an electronic quartz clock? How do we verify the regularity of the oscillations of a quartz clock? By the electromagnetic radiation of a maser? And how do we verify the regularity of the oscillations of the electromagnetic (ELMG) radiation of the maser? …… ? So we simply assume that the ELMG oscillations are regular.
Imagine a world without regular, stabilized events – without so-called oscillations. Imagine a world with no constant rulers such as a metre stick or a yardstick. Such a world is like a garden with no fixed rulers. How, and with what, could we measure inside this environment? There are no regular, stabilized grouped events like oscillations, no yardsticks, no fixed shapes.
Imagine a garden or a landscape into which we came or were born. Everything is changing, more or less. We need to measure the area of a future planting bed. What should we use as a unit of measurement? Something that hardly changes – say, an oblong, solid stone. That is the unit of length. Then we take a rope, mark the length of the stone repeatedly along it, and we have a ruler. We can have more ropes (rulers) and check them occasionally against the basic unit of length. Let us not forget that our basic unit of length is taken from the garden – the more or less variable garden. We have no basic unit from anywhere outside the garden.
It would be a very unpleasant situation if everything were changing – if, in short, even the “stones” were changing like the plants. Our measurement of the area would not last long, or would only be valid for a very limited time :-)
But fortunately we have a stone that at first sight is indifferent to all the other changes. As time goes on, however, we find that even the stone is somehow smaller – judging, perhaps, by the shrinking pattern on its surface. What to do? We won’t give up; we’ll look at what the stone is made of. So we break a stone – another one, not our base unit. We find that the stone is made up of indivisible particles. These indivisible particles change rapidly and unpredictably within a given tolerance, e.g. plus or minus half a particle. And when we take a closer look at everything in the garden, we find that everything is made up of indivisible, rapidly changing particles – the flowers, the soil, the air, and even our observing instruments such as a magnifying glass or a microscope. And it can be supposed that we, the observers, are also made up of indivisible but rapidly changing particles.
We need “regular” and “stabilized” grouped events (units) for our abstraction – to recognize shapes, structures, objects and processes, to name them, count them, predict them, and so on. We use these units to measure other shapes or structures that change. But beware: what we are really defining are peaks of stabilized grouped events, and we use them to measure other peaks of grouped events. It is difficult to suppose that these peaks will not change along with the surrounding fluctuations. These grouped chaotic events can be described mathematically – we separate the peaks of their appearances and number them. The actual peaks of the probability distribution of the appearance of grouped excitations – or rather the resolved (mostly averaged) peaks – represent the numbers for distance, time, temperature, intensity, etc. It doesn’t have to be like this all the time. Such changes are a big source of potentially large deviations. See the butterfly effect – the flap of a butterfly’s wing causes a chain of events on the other side of the Earth. But there are hundreds and hundreds of trillions of “wing movements” in the chaotic fluctuations – and the question is what gets cancelled out, what gets amplified, and what stays as background noise.
We can take base units only from our space, from our universe. There are hardly any external units of measurement – units from outside our universe. Compare the measurement of an expanding balloon or circle, or the extension of a heated rod: in those cases we have external metres. But if the unit is part of the heated rod, the balloon or the circle (scratches on its surface), what do we measure? Will there be any change? Certainly not. The unit (the scratches) is also changed by the heating, the inflation, etc.
Consider an experiment – let’s measure the change in length of a long steel rod as a result of thermal expansion. Place a steel ruler against the cold steel rod and you have a starting value. Heat the rod, apply the ruler again, and we get a different, higher value – the heated rod has elongated. But if we leave the ruler on the rod long enough, the ruler heats up too and the difference disappears.
Not to mention that the scale is good for ordinary comparisons, but dividing 1 m (of alloy) by one million or an even larger number is a question mark, because we do not know the effects at such small scales relative to the alloy of the metre. Now we have the number of electromagnetic oscillations as the measure of the metre. That’s good – the accuracy is increasing rapidly. But again, to divide one oscillation by one million or an even larger number – again, a question mark, see above. Hence the Planck time, a number extremely small relative to one oscillation of the atomic clock.
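As a rough illustration of the scales involved, a few lines of Python can compare one period of the caesium clock radiation with the Planck time. This is only a sketch; the Planck time value is the usual CODATA approximation, not something the article defines:

```python
# Sketch: how many Planck times fit into one oscillation of the Cs-133 clock?
# The Planck time below is an approximate CODATA value (an assumption here).

CS_FREQUENCY_HZ = 9_192_631_770   # SI definition of the second
PLANCK_TIME_S = 5.391e-44         # approximate Planck time, seconds

cs_period_s = 1 / CS_FREQUENCY_HZ  # one oscillation, roughly 1.09e-10 s

ratio = cs_period_s / PLANCK_TIME_S
print(f"One Cs oscillation is about {cs_period_s:.3e} s")
print(f"Planck times per Cs oscillation: about {ratio:.2e}")
```

The ratio comes out around 2 × 10³³ – exactly the kind of “dividing one oscillation by an enormous number” the paragraph warns about.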
How does light travel such a small distance in a vacuum as the Planck length? Do we know anything about it? Do we suppose it travels continuously and smoothly? Well, the vacuum isn’t that continuous and smooth.
Imagine the following situation – see the figures below.
There are several “regular” shapes – blue and white circles. Note their nearly regular appearance and the nearly regular distances among them. How do we choose a base unit among them? For small areas we can take the distance and size of the circles as almost regular. But if we look closely, they are not regular. How can we explore this? With the help of a chosen base unit that is as irregular as everything around it? There is no absolute unit, no outside ruler – only ratios between the selected base value and the measured values. See the image below – it shows growing biological plant cells.
These plant cells arise from germ cells by division. Then the cells grow, like our spacetime. But we are inside these cells, and we can’t use any external rulers – only internal rulers, such as the size of a cell or a part of it.
Our universe is like an elephant. Try to measure a growing elephant, from baby to adult – not with an external ruler, but with an internal ruler derived from the elephant itself.
What do we choose as the base unit of distance? Probably something clear and clearly defined, like a bone or a vertebra. And as the elephant grows, the relative sizes of its parts will change – but our basic unit will grow too. We cannot find in the elephant, as in the universe, a fixed reference point or distance.
In the future, instead of the caesium atomic clock, the base unit of frequency (time) could be the frequency of the relic radiation. The advantage of this radiation is its omnipresence in the universe and its homogeneity. The wavelength of the relic radiation is approximately 1 mm, so it is usable by our instruments. The “biggest advantage” is that the universe would then stop “ageing and cooling”.
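For orientation, the frequency corresponding to that roughly 1 mm wavelength can be sketched with f = c / λ. The 1 mm figure is the article’s approximation (the actual relic-radiation spectrum peaks near 160 GHz in frequency):

```python
# Sketch: frequency of radiation with a wavelength of about 1 mm (f = c / lambda).
C = 299_792_458       # speed of light in vacuum, m/s
wavelength_m = 1e-3   # ~1 mm, the article's approximate relic-radiation wavelength

frequency_hz = C / wavelength_m
print(f"f is about {frequency_hz:.3e} Hz")  # around 3e11 Hz, i.e. ~300 GHz
```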
E.g. the speed of light in vacuum – 299 792 458 m/s (metres per second).
From the SI system of base units we have the definition of one second: 9,192,631,770 periods of the radiation of Cs-133 (the atomic clock).
1 metre is defined as the distance travelled by light in vacuum in exactly 1/299 792 458 of a second.
Add the definitional values to the expression for the speed of light:
1 s = 9 192 631 770 periods
1 m = 1/299 792 458 × 9 192 631 770 = 30.663319 periods
299 792 458 m/s = 299 792 458 × 30.663319 / 9 192 631 770 = 9 192 631 770 / 9 192 631 770 = 1
The speed of light in vacuum is exactly 1.
One of what? Well, just one. In the same way, we find that the mass of a hydrogen atom is exactly equal to the mass of a hydrogen atom, or a carbon atom weighs exactly the same as a carbon atom. Nothing more, nothing less.
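The arithmetic above can be checked directly. A minimal sketch in Python, using only the two definitional SI constants:

```python
# Sketch: the speed of light expressed purely in caesium oscillations.
CS_PERIODS_PER_SECOND = 9_192_631_770  # definition of the second
C_M_PER_S = 299_792_458                # definitional speed of light

# One metre, measured in periods of Cs-133 radiation:
metre_in_periods = CS_PERIODS_PER_SECOND / C_M_PER_S  # about 30.663319

# Speed of light as (periods of distance) per (periods of time):
c_dimensionless = C_M_PER_S * metre_in_periods / CS_PERIODS_PER_SECOND
print(c_dimensionless)  # 1.0 (up to floating-point rounding)
```

Whatever numbers the definitions pin down, the units cancel: the ratio is 1 – “one of what? Well, just one.”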
Next example – we have a circle that is expanding (imagine a 2D bubble). This circle expands next to us – on paper, for example – and we have to measure the speed of its expansion.
What to do? Choose a coordinate system with its centre (zero point) in the expanding circle. Next, choose a unit of distance and a unit of time. We measure the initial radius R1, then the final radius R2, and the duration T of the expansion of the circle between these two radii. The result is the speed of expansion V. Next, we are interested in when the circle began to expand. That’s easy: we know the speed of expansion V, we know the radius R2, and we can calculate the beginning of the expansion. It should be added that we have chosen a coordinate system, including units of distance and time, which is independent of the expansion of the circle. In other words, we are outside the expanding circle.
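The external-observer measurement just described can be sketched in a few lines. The function names and the numbers R1, R2, T are illustrative assumptions, and the “age” formula assumes the expansion speed has been constant:

```python
# Sketch: measuring an expanding circle from OUTSIDE, with independent units.

def expansion_speed(r1: float, r2: float, t: float) -> float:
    """Average speed of expansion between radii r1 and r2 over duration t."""
    return (r2 - r1) / t

def time_since_start(r2: float, v: float) -> float:
    """Assuming constant speed v, how long ago the radius was zero."""
    return r2 / v

# Illustrative numbers in external units of length and time:
R1, R2, T = 2.0, 5.0, 3.0
v = expansion_speed(R1, R2, T)   # speed of expansion
age = time_since_start(R2, v)    # time since the expansion began
print(v, age)
```

The whole calculation works only because R1, R2 and T are expressed in units that do not themselves expand – which is precisely what the 2D beings below cannot have.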
But what if we want to solve the case where we ourselves are part of the expanding circle? Let’s imagine two-dimensional beings who live in 2D and want to know the speed of their expanding circle. These 2D beings will do as we do: choose units of time and distance and determine the imaginary centre towards which all expanding objects converge (from observation). But their units of time and distance are an internal part of the expanding circle, so they will expand too. Here the situation is much more complicated: the 2D beings have no external coordinate system independent of the expansion of their spatial circle.
From our 3D view outside their expanding circle, we can see how their unit of distance also changes. Seen from outside, the inner base unit of distance decreases towards the centre of expansion, and conversely its size increases with growing distance from the centre. For this reason, the 2D beings measure a much larger diameter of their circle. What’s more, if they head towards the centre of their circle, they will never reach 0 – it is an ever-decreasing infinite series of units. Not to mention the basic issues around the definition of distance and time. When did time begin to exist? Time began to exist along with matter – with objects whose speed of motion is significantly less than 1, the speed of light; with objects where there is a slippage between time and distance; with objects that have a non-zero rest mass, as opposed to photons.
The same problem arises with determining absolute motion, absolute temperature, and other “absolute” values inside an ever-changing space – where, of course, relative proportions are valid and where it is possible to determine very well what is about to happen. See internal combustion engines, airplanes, nuclear reactions and much, much more. What we have learned within given limits, e.g. about nuclear reactions, we are able to repeat successfully at any time and anywhere in the world, within those limits.
As can be seen from the previous examples – the validity of base units, as well as the validity of all comparative measurements, is limited to a certain range – within the given limits where we know the properties of natural processes. And we are able to measure them, repeat or otherwise analyze them and thus predict them.
E.g.: take the size of quantum fluctuations as the base unit. That’s not possible, for these are always changing; they are always different, yet still equally chaotic. See an untuned TV screen – chaotic fluctuations. So we have to use a tuned TV screen with a TV programme on it. How do we choose the base unit on the TV screen? There is only one requirement – the existence of a TV programme! Only within it can we look for base units – of time, length or anything else. We have to watch the TV programme carefully. The unit should be regular, or rather should appear regular to us – for example, the change in the height of the sun above the horizon (sunrise, culmination, sunset). After a while we find that we need a more precise measurement, so we choose the wavelength of light. Over time, through precise measurement, we find that both the wavelength of light and all the objects and processes in the TV programme on the screen are influenced by chaotic fluctuations. In other words, processes that we used to think of as regular are not so regular. The effect of chaotic fluctuations is very small for an ordinary TV scene, but at the micro level it is fatal – see the wavelength of light as a base unit.
Precise measurement of an energy value at the atomic level can only be made at the cost of losing information about the duration of the measured energy. In short, we know the measured energy, but we do not know its duration – and vice versa: if we know the time, we do not know the value of the energy. This is one consequence of Heisenberg’s uncertainty principle.
Energy, and therefore matter, is the source of gravity and affects gravity. Bound energy (matter) exists in certain sizes, in packages – in short, in quanta. Free energy (matter) can be freely continuous and divisible; its frequencies are not quantized. The frequency of a free electron can take any value, as opposed to the frequency of a bound electron.
Bound matter (bound energy) is a quantum system. This is the source of quantum gravity. Just as free energy (matter) is the source of gravity. There is a fundamental difference between bound and free energy. And that is in its action. For the action of the gravitational field, see general relativity.
Relativity is subordinate to the quantum system and the quantum system is subordinate to relativity. There is an interaction between them.
What can I say? The system of measurement, the system of base units, is limited both from above and from below. And it is not possible to freely divide base units and other physical values across many levels without knowing the properties of the natural processes at the relevant level.
The basic problem – the expanding universe versus base units. The universe is filled with matter and radiation. Matter and energy have the same basis – quantum fields. Both matter particles and radiation particles are excitations of the quantum field. Basically, there is not that much difference between radiation particles (photons) and matter particles (quarks, protons, etc.). Both are material, both carry energy and have mass, and both are wave and particle in nature. With matter, the particles are always slower than their waves, and a lot of the “motion” is hidden in bound forms; with radiation, the particles (photons) are as fast as their waves – all the energy is “free”. This is why matter is sometimes described as frozen or bound energy. See the images below – they show two basic models of our universe, with matter particles marked as circles of different sizes. The question is whether particles change or remain stable during the cosmic expansion – and whether we are able to find that out using internal units.
There are two basic models of our universe. The first has a beginning and an end point: the universe expands and then contracts. The second has a beginning point (it may not be a point, but more on that later) from which the universe expands to the big rip.
Both models have one thing in common – the base units are chosen from the universe itself. Base units are internal, not external. The universe changes (it may change in ways other than the two models above) and so do the base units. Do particles expand like radiation? (See the relic radiation versus the whole spectrum of electromagnetic radiation.) In any case, the base units for time, distance and the rest are derived from our universe. We have no outer base units for distance and time – no outer ruler, no outer oscillations. We have only inner rulers and inner oscillations, and we have chosen one of the oscillations as the basis for the others. See the definition of time based on the caesium atomic clock – 1 second is roughly 9 billion oscillations.
It is possible to plot a triangle on a spherical (or other) surface to determine the degree of curvature of the space – but only if the gravitational field is uniformly distributed over that surface. The ruler remains the same, although curved (which we cannot detect), and the distance between the lines of the ruler stays constant. If the gravitational field is inhomogeneous, then in a given part of the surface there will be not only a change in the curvature of the ruler, but also a change in the distance between its lines – which, of course, we cannot detect. We are inside with our rulers, not outside.
The basic question is: do the base units change in accordance with the expansion of the universe? Or are there stable base units, like the “good old” platinum-iridium alloy, which seems not to expand? Or is there a slippage in matter between the expansion of the universe and the change in, say, the length of a material object such as the alloy metre above?
Consider an elastic membrane – a rubber sheet in the shape of a circle. We stretch this rubber sheet equally on all sides. Let’s put different shapes on the sheet – waveforms of oscillations, for example. It is clear that the wavelength of the oscillations will increase as the sheet stretches. But relative to what? Where is the reference point? Relative to us, the observers outside the elastic rubber sheet. Now imagine that the two-dimensional sheet is populated by two-dimensional creatures that know only two length dimensions and time. These creatures measure no change as the sheet stretches, for their bodies are an indivisible part of the sheet; they stretch too, including their scales. See the images below.
Let us go further – stretch or contract the elastic rubber sheet locally at several points A, B, C. This gives us local changes in curvature. The creatures inhabiting the sheet are able to measure these local changes of curvature – but with one requirement: the creatures must be outside the local curvature changes. Their reference point lies outside the measured local change in curvature. Then the creatures will measure a difference in wavelength – the difference between the etalon (oscillation) and the oscillation measured at the local change at point A. The etalon is the wavelength of the caesium atomic clock – integral to the whole elastic sheet. That is all. We do not and cannot know the wavelength of the etalon oscillation. It is one. One of what, and relative to what? To itself. We can only measure differences between the etalon oscillation and the measured oscillations.
But an outside observer (a three-dimensional human being) can measure how big the etalon wavelength is. He can also measure how much the wavelength changes as the sheet stretches. But even this external observer is limited by his own three-dimensional expanding universe. His caesium atomic clocks do not allow him to determine their absolute length or duration – we only know that 1 second is approximately equal to 9 billion oscillations. So there may be another observer above our universe, but even he will be limited by his superuniverse. And so on ad infinitum – a kind of relativity of distance to infinity. It is reminiscent of Cantor’s hierarchy of infinite sets, or of Gödel’s theorems on the completeness and consistency of axiomatic systems.
Let’s go back to the reality of an expanding universe with everything that is part of it – including observers, nature and the measuring instruments.
Imagine a flat space with only two dimensions that expands (see the images below) – a two-dimensional space filled with adaptive quantum foam. All three images show the expansion of the universe: a hyperbola, a circle, or another shape that can change. Is it possible to detect this using measurement methods that are internal parts of such a universe?
Inside the flat space, the base unit of length – the ruler – is selected (two thick lines). There are two choices for selecting the ruler. In the first, the ruler changes in accordance with the expansion of the flat space, which is filled by the adaptive quantum foam (black thick line on the left). The second case is different – here the quantum foam is bound into stable structures, and the ruler is selected from these structures (grey thick line on the right). In this case it may happen that the ruler remains the same during the expansion for a certain period of time. We leave aside the question of the emergence (and disappearance) of bound forms of quantum foam in the early (and final) stages of the expansion.
How do we detect the expansion of the universe? Not to mention that the universe could expand differently in different places.
How can we measure the density of the omnipresent ocean of quantum vacuum fluctuations? Not only can we not directly see the fluctuating quantum field (we have only indirect evidence of the so-called zero-point vacuum fluctuations), but we also cannot know what the density of the quantum fluctuations is – and whether the density of the fluctuating environment changes or remains constant. Take, for example, the maximum speed in the universe – the speed of light. There are hypotheses that propose measuring the relative velocity of high-energy gamma radiation versus low-energy radiation (UV, light, infrared). Gamma radiation should travel more slowly than UV radiation because of the quantum nature of the “supporting” medium (quantum foam). Just as in everyday life, a car with larger wheels will go faster (relative to the roughness of the road) than a car with smaller wheels that “copy” the roughness of the road: both wheels have the same circumferential speed, and the small wheel turns much faster than the big wheel, but the small wheel has to copy the rough (lumpy) surface of the road, up and down. This is not a rejection of special relativity, but a confirmation of it. The theory of relativity is exactly valid in flat spacetime without metric defects – ideal circumstances that do not exist in the universe, just like an ideal point or an ideal line in mathematics, or an ideal gas in thermodynamics. In other words, in an equally “dense” quantum field, light will travel equally regardless of its wavelength. Not to mention possible changes in the “density” of the quantum field. This cannot be verified by our units of measurement, but neither can it be negated.
What is the age of our expanding universe, measured by the universe itself – by the rulers (oscillations) that result from the properties of the universe? The basis of science is the unit of measurement – one distance, one number of oscillations. Such a unit is probably changing over time. How, then, can we estimate the past or the future – especially what was before the creation (selection) of the unit?
It is impossible to measure time near a singularity, close to zero. We don’t know the nature of events near our calculated absolute zero. It is impossible to reach 0 K, just as it is impossible to reach 0 s – especially with units derived from something, not from nothing. The measuring frequency-ruler has a period. Near the singularity this period itself would change and the frequency would increase. In other words, the ruler itself changes.
We are trapped with our base units, our base rulers. They are relative, not absolute. We are able to measure, e.g., the elongation of steel under thermal expansion – but such a measurement relies on an external ruler independent of the elongating steel. How do we measure the elongation of the steel with an inner ruler?
See the expansion of our universe. There are three possible processes:
– units or rulers do not change, they are stable despite ambient changes
– units or rulers change in accordance with the expansion of our universe
– units or rulers slightly change (slipping) in relation to the expansion of our universe
We suppose the rulers do not change as galaxies move away from each other. But it is hard to believe that there are “fixed points” inside the universe that keep their proportions despite all the changes – especially when everything is made up of those “fixed points”: atoms, molecules, etc.
Let’s go back to a question that occasionally occupies our minds: what is going on in the Pleiades (say near Electra or Maia) now – right now, in our earthly time? The query is meaningless. Every part of our universe has its own time – there is no universal simultaneity. It is the same meaningless query as asking for the absolute value of one oscillation. What absolute size? We cannot know or measure the absolute size of one oscillation; we have no external rulers. We are able to measure only the ratio of one oscillation to another oscillation or to several others. We can select one oscillation as the base unit for the others and thus compare (measure) not only the oscillations, but also all the irregular happenings in the world.
An overview of the basic unit definitions shows the following:
the most basic of the base units is time – from the definition of time we then get the base units of length and mass.
Roughly speaking, we can reduce all units to frequency – frequency determines time, length and mass. The higher the frequency, the higher the mass. The same holds for temperature – i.e. the thermal motion of atoms or molecules. But it is not so clear here: heat energy versus temperature. Thermal energy is given by mass, heat capacity and temperature. The biggest problem is the heat capacity, which is difficult to define, and then only over a small range of temperatures. The reason is the internal bonds in molecules, then the properties of atoms – the chemical elements – and, in the end, the excitations of the quantum field.
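The frequency-mass link alluded to here is the Planck-Einstein relation E = hf combined with E = mc². A brief sketch, with CODATA-style constants and the electron's approximate Compton frequency as an assumed input:

```python
# Sketch: 'the higher the frequency, the higher the mass' via E = h f = m c^2.
H = 6.62607015e-34  # Planck constant, J*s (exact by SI definition)
C = 299_792_458     # speed of light, m/s

def mass_from_frequency(f_hz: float) -> float:
    """Equivalent mass of a quantum of frequency f: m = h f / c^2."""
    return H * f_hz / C**2

# The electron's Compton frequency (approximate value, an assumption here)
# should reproduce roughly the electron rest mass:
electron_compton_f = 1.23559e20  # Hz
m = mass_from_frequency(electron_compton_f)
print(f"m is about {m:.4e} kg")  # about 9.11e-31 kg
```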
A very important short remark about temperature. Briefly – the zero point of the Celsius scale (0°C) was chosen as the temperature at which ice melts (water freezes) at a defined atmospheric pressure, and 100°C is the temperature of boiling water at a defined atmospheric pressure. These two points determine the temperature scale, just as two points in mathematics determine a straight line. O.K., let’s go on. Scientists (Boyle, Mariotte, Gay-Lussac, Pascal, etc.) investigated the behaviour of gases – the change in their volume depending on temperature and pressure. At constant atmospheric pressure, they found that a reduction in temperature leads to a reduction in the volume of the studied gases – air, nitrogen, hydrogen, etc. Expressed mathematically, we get a linear dependency – a line with a given slope. As many different gases, so many different slopes. That was the premise – see the figure below.
The tremendous surprise was that these divergent lines, after extrapolation, all meet at a single point – or within a small area, depending on the accuracy of the measurement. See the next image.
Furthermore, on the basis of past measurements, the theory of the ideal gas was developed. This theory works with a gas that behaves ideally – its particles do not interact. Light gases such as hydrogen and helium come closest to the ideal gas, but these gases condense at a sufficiently low temperature, whereas the ideal gas does not condense at all: its volume shrinks to zero at a temperature of −273.15°C – absolute zero.
It should be added that this temperature was determined solely by extrapolating to the ideal gas on the basis of measurements made on real gases.
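The extrapolation described above can be sketched numerically: generate volume-temperature points for a gas obeying Charles's law and extrapolate the fitted line down to zero volume. The data below are synthetic (an assumption for illustration), not real measurements:

```python
# Sketch: extrapolating a V(T) line of ideal-gas-like data down to V = 0.
# Synthetic data following Charles's law: V = V0 * (1 + t / 273.15).

def absolute_zero_from_fit(temps_c, volumes):
    """Least-squares line V = a*t + b; returns the t where V = 0, i.e. -b/a."""
    n = len(temps_c)
    mean_t = sum(temps_c) / n
    mean_v = sum(volumes) / n
    a = sum((t - mean_t) * (v - mean_v) for t, v in zip(temps_c, volumes)) \
        / sum((t - mean_t) ** 2 for t in temps_c)
    b = mean_v - a * mean_t
    return -b / a

temps = [0.0, 20.0, 40.0, 60.0, 80.0, 100.0]  # degrees Celsius
v0 = 22.4                                     # litres at 0 degrees C (illustrative)
vols = [v0 * (1 + t / 273.15) for t in temps]

t0 = absolute_zero_from_fit(temps, vols)
print(f"Extrapolated absolute zero: about {t0:.2f} degrees C")
```

With real gases the individual lines have different slopes, but each one, extrapolated, lands near the same −273.15°C.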
We also know from the third law of thermodynamics that this temperature is unreachable, just like the speed of light according to the special theory of relativity. Yet in thermodynamics we already know negative absolute temperatures – spin systems. This is not a fabrication, nor is it nonsense – it is a reality, the basis on which quantum generators – masers and lasers – work.
Let’s go back to the ideal-gas theory. Let’s think of this gas as particles – zero-size points that vary in their positions. In other words, these points are indistinguishable from each other (as electrons are in physics). We know from Cantor’s theory that one point of zero size can contain infinitely many points of equal, zero size.
Let’s think of the ideal gas as zero-size points distributed in a certain volume. When we heat the gas, the volume – that is, the distance among the ideal points – increases; when we cool the gas, it decreases. This ideal gas can be cooled to absolute zero: in other words, all the points are compressed into one point, as Cantor’s theorem allows. But we are in the real world. The individual points (atoms, molecules) are built from three different particles – protons, neutrons, electrons. We can, however, reduce these particles to excitations of a quantum field full of vacuum fluctuations. In other words, we can build a model where the ideal points are replaced by excitations of the quantum field (“foam”). Call these excitations real points. Real points do not have zero size; they have a certain size within a probability range – see the Gaussian curve of the normal distribution, with its maximum in the middle.
The entire beauty of the world and the universe throughout history is given solely by the diverse spatial distribution of real points. We see this best in the basic atoms of organic chemistry, where carbon, hydrogen, oxygen and nitrogen atoms make up more than 90% of all organic compounds – life from proto-organisms, through viruses and bacteria, to complex organisms such as plants and animals, including humans. The only difference between a virus and a human is the number and distribution of the basic real “points.”
The optimal temperature range in the world is between +10°C and +30°C – there, most elements and their chemical compounds exist in all three forms, SOLID, LIQUID and GAS. See the image below.
There is a rule: the higher (or lower) the temperature, the lower the diversity – not only of chemical elements and their compounds, but also of organisms.
But let’s return to thermodynamics. It is not possible to compress our real points to zero, or close to zero – then the energy of one point could be greater than the energy of the entire universe; see Planck’s equation for the quantum of energy, E = hf (h – the Planck constant, f – frequency). It is not possible to entertain a zero size for a real point. This brings us to the singularity – zero volume together with infinite energy. The singularity is a very unpleasant state that arises from our ignorance of reality at the given level, for we cannot extrapolate regardless of the basic principles of the existence of real points. Their existence is given by excitations of the quantum field in a given space; without space there are no excitations.
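The divergence behind that argument can be illustrated with E = hf = hc/λ: squeezing a quantum into an ever smaller wavelength drives its energy without bound. A minimal sketch (the list of wavelengths is purely illustrative):

```python
# Sketch: the energy of one quantum, E = h f = h c / lambda, grows without
# bound as the wavelength shrinks toward zero.
H = 6.62607015e-34  # Planck constant, J*s
C = 299_792_458     # speed of light, m/s

for wavelength_m in (1e-3, 1e-9, 1e-15, 1e-25, 1e-35):
    energy_j = H * C / wavelength_m
    print(f"lambda = {wavelength_m:.0e} m  ->  E = {energy_j:.2e} J")
```

Each factor of ten off the wavelength multiplies the energy by ten – extrapolated all the way to zero size, the energy of a single point diverges, which is the singularity the paragraph objects to.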
A real point (electron, quark) is a vacuum excitation spread through the entire space of the universe with a given maximum (it makes no sense to ask whether it lies exactly in the middle). The probabilistic range of the particle excitation is the real point. After a while the real point annihilates, and a new one is created almost in the same place – a physical process of annihilation and creation. In effect, the real point keeps its position; the rule is always a new point whose position is nearly that of the old one. All real points are constantly being renewed.
These points form atomic nuclei through bonding (the strong nuclear interaction). The atomic nucleus is a set of varying numbers of real points – from hydrogen to uranium. (I leave aside the weak nuclear interaction, responsible for the decay of atomic nuclei.) From one set of three points – the nucleus of hydrogen – through two sets forming the nucleus of deuterium and four sets forming the nucleus of helium, up to the nucleus of uranium with 238 sets. Each set has the same number of points: three real points (quarks) bound by the exchange force of gluons. Thanks to the strong nuclear interaction these sets hold together. Thanks to the electromagnetic interaction, atoms exist – sets of points with “orbiting” points around them. We call such “orbiting” points electrons.
Atoms are the most stable structures in the universe; they are hard to change. Atoms connect with each other through valence electrons with given properties. Next, through the electromagnetic interaction, we get molecules. Let’s call them structures: they are given by sets of grouped real points called atoms. Molecules are no longer as stable as atoms; they split or fuse more easily.
Summary – the constantly renewed real points have given properties: the properties of the four basic physical forces (interactions). These properties persist regardless of the annihilation and creation of the real points called particles.
The whole world can be reduced to the level of real points with a certain size in a probability range – see the Gaussian curve of the normal distribution with a maximum in the middle. These points always have given properties, at least in pairs; a real point by itself is meaningless. The given properties are the four basic physical forces: the strong and weak nuclear forces, the electromagnetic force and gravity.
The real points interact with each other through the four physical forces, but with limits. The strong and weak nuclear forces act only over short distances (comparable to the size of real points), whereas the electromagnetic and gravitational forces act over unlimited distances.
Everything – all matter in the universe (chemical elements, molecules, organisms, up to the human brain) – consists of these real points grouped into structures of different levels of complexity. The only difference among atoms, molecules, protozoa, organisms or humans is in the number of real points and structures. Shapes and structures are grouped into different arrangements (levels of complexity) in accordance with given properties (bonding laws). All materials in the world are just suitably grouped, interacting structures of real points in accordance with the given bonds at each level.
These bonding laws exclude many otherwise conceivable combinations of grouped real points. They determine the properties of the individual structures built from the simple shapes – the real points – and these structural properties in turn determine the possible arrangements of atoms in chemical molecules. Not all options can be realized: atoms can only combine into compounds according to their chemical properties. See the hydrocarbons and their companions – H (hydrogen), C (carbon), O (oxygen), N (nitrogen). How many combinations are possible from these elements (not counting phosphorus, magnesium, calcium, sulphur, etc.)? Is there a difference between HC and CH? Is it possible to have HC4 instead of CH4 (methane)? See the tree-like structure of the hydrocarbons – methane, ethane, propane, butane – unexpected tree-like arrangements of increasing complexity.
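The point that bonding laws exclude most conceivable formulas can be made concrete for the simplest hydrocarbon family. A minimal sketch, assuming acyclic saturated hydrocarbons (alkanes), whose bonding rules permit only the pattern CnH2n+2; the function name is my own:

```python
def alkane_formula(n_carbons: int) -> str:
    """Molecular formula of an acyclic saturated hydrocarbon.
    The bonding rules allow only C_n H_(2n+2), not arbitrary C/H ratios,
    so e.g. 'HC4' can never be realized, while CH4 (methane) can."""
    if n_carbons < 1:
        raise ValueError("need at least one carbon")
    hydrogens = 2 * n_carbons + 2
    carbon_part = "C" if n_carbons == 1 else f"C{n_carbons}"
    return f"{carbon_part}H{hydrogens}"

names = ["methane", "ethane", "propane", "butane"]
for n, name in enumerate(names, start=1):
    print(name, alkane_formula(n))
```

Out of all letter-combinations of C and H, the bonding laws admit only this thin family – exactly the exclusion the paragraph describes.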
It is curious how certain very complex structures of grouped real points can perceive and distinguish other structures of grouped real points. Where is the origin of perception and distinguishing? Surely not in the real points themselves, however complex their grouping. The origin of perception is to be found in the bonds among real points (the given properties) – not only the four basic physical interactions, but also the bonds of higher levels: chemical, biological, psychological, etc.
The mutual arrangement of real points, together with their given properties, culminates in perception – awareness of oneself, of one’s limitations and also one’s possibilities. And furthermore, the ability to learn, the ability to gain experience from earlier perceptions, the ability to create what is new and previously unsuspected in nature – artworks, songs, sculptures, inventions, technical processes, handicrafts, etc.
The basis of worldly examples is not an ideal point, an ideal line or an ideal gas; the basis is a real shape represented by a real point with a probability distribution of its appearance. Every probability distribution is different from every other, and thus every shape differs from every other shape. No two shapes are the same, just as no two probability distributions are the same. Likewise the grouping of shapes into complex structures – from chemical elements, to molecules, to minerals, to organisms, to plants, to animals, to people and their creations – for evolution goes on.
It is impossible to divide probability distributions, or to examine one using another probability distribution of higher frequency – and hence higher energy, which disturbs the observed subject.
A model of the universe with a suitable distribution of real points can serve as a pattern for a linguistic model – say, the English language with its 26 letters.
We begin with a single line. From this line we define a segment of a given length, and we have a basis for shaping the 26 letters of the alphabet from A to Z. All the letters have the same basis – a segment; they differ only by the changes of their shapes. Notice: if we write the letter A 100 times or more, every appearance of the letter A is slightly different from the others.
We know very well that a tremendous number of shapes could be formed from one segment, far more than the existing 26 (A to Z); the basic requirement is good distinguishability. So we have the 26 letters of the alphabet. From these letters we can create a great many words, from one or two letters up to 10 or 15. The vocabulary of each language contains a few thousand words in routine, frequent use, and a few tens of thousands used less often, while the number of possible letter combinations of that length is tremendous. Just as with real points: far more combinations can appear than the roughly 120 basic chemical elements of the periodic table.
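How tremendous the space of combinations is can be counted directly. A sketch of the crude upper bound on letter strings of length 1 to 15 over a 26-letter alphabet (it ignores pronounceability, so almost none of these strings are real words):

```python
# Upper bound on the number of letter strings of length 1..max_len
# drawn from a 26-letter alphabet.
ALPHABET = 26

def possible_strings(max_len: int) -> int:
    """Count all strings of length 1..max_len over the alphabet."""
    return sum(ALPHABET ** n for n in range(1, max_len + 1))

total = possible_strings(15)
print(f"strings of up to 15 letters: {total:.3e}")
# a language actually uses only tens of thousands of these combinations
```

The contrast between this astronomical bound and a vocabulary of tens of thousands of words mirrors the text’s point about real points versus the roughly 120 chemical elements.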
Notice: if we write the word apple 1000 times or more, every instance has its own slight differences from the others, just like the letters. Compare this with biological species – see apples. There are lots of apple varieties, varying in colour and shape, and yet they all meet the basic definition of the biological apple species. And most importantly, no two apples throughout the history of the world are exactly alike.
Notice: indeed, all letters of all languages worldwide (Arabic or Chinese, Indian or Japanese) can be composed from one suitably shaped segment or a few suitably shaped segments.
In the end, every different part of our universe could have its own word. The word apple represents certain morphological characters common to a certain biological species. Likewise, the word snowflake clearly denotes the hexagonal arrangement of ice crystals, although we know every snowflake in the world is an unrepeatable original – exactly as two wavelengths in a line spectrum differ from each other and yet both satisfy the basic relation given by the Planck constant.
It is impossible to describe everything. Every shape, every structure and every process differs from every other; there would have to be as many describing shapes as there are shapes to describe. Where would we find them? For better understanding we must form sets with similar characteristics.
An infinite number of shapes needs to be reduced and grouped into a limited number of similar ones – sets of shapes with given properties and common characteristics, while respecting the originality of each shape. See maple leaves: at first glance we clearly identify a maple leaf, but if we look at all the identified maple leaves in great detail, we find that all the leaves differ from one another. These sets – called biological species – change over time, not only ontogenetically (the development of an organism as an individual, from germ to adulthood) but also phylogenetically (the evolution of organisms through geological epochs) – see the clearly visible differences between present-day plant life and, say, Devonian plant life.
Already here we are offered a simple definition of biological species – and indeed of all natural objects and processes in the world: easy to distinguish. The maple leaf differs markedly from the oak leaf, the lime leaf or other leaves. Similarly, a liquid is clearly different from a solid or a gas, whereas determining the type of a gas or liquid is already harder. But even here there are methods to clearly separate oil from water, or air from steam. And so we can go deeper – here the intensity of the observational or separating methods increases: distinguishing types of oils, or the degree of moisture in steam. And one can go deeper still. Eventually we come to the conclusion that there are no two exactly identical oils in the world, no two completely identical types of petrol, no two exactly identical snowflakes.
On the other hand, we can generalize. Matter comes in four basic forms – solid, liquid, gas, plasma. The only difference between a solid and a liquid or gas is in the stiffness, the degrees of freedom, of the chemical bonds. Solid, liquid and gas are common occurrences of atoms – stable, non-ionized atoms. And that makes this trio radically different from plasma, in which atoms are not stable but more or less ionized, that is, with free electrons. At very high temperatures, atomic nuclei may fuse or split.
The basic physical classification of the world
The base of current physics is the omnipresent quantum field that completely fills the universe. This field is the supporting (excitation) environment for effects such as radiation and matter. Radiation has no rest mass, as opposed to matter.
Next, we can divide radiation by wavelength into radio, microwave, infrared, light, ultraviolet, röntgen (X-ray) and gamma radiation. How to further divide ultraviolet or gamma rays – hard or soft? We can divide light into the pretty rainbow colours ranging from violet through blue, green and yellow to red. And artists can go even further, distinguishing degrees of saturation. But this resolution is no longer done by a machine or a measuring apparatus; it is done only by humans, through perception.
Let’s get back to matter. Matter divides into ionized and non-ionized; ionized matter is plasma. Non-ionized matter divides into solid, liquid and gas. Furthermore, solids divide into crystalline and amorphous, liquids divide according to viscosity (oil, water), and gases according to density. And in this way we can continue dividing, up to a limit – and that limit is the originality of each observable shape or process.
The basis of all sciences, all crafts, all art – of all human activity – is clear resolution. Without distinguishing and sorting facts, science cannot exist. Distinguishing is needed in the crafts just as in the arts.
We are still left with the question of where, in the chaotic ocean of quantum fields, the regular or quasi-regular structures come from – why they hold together for some time, and why they evolve in spite of the surrounding and internal chaos.
Imagine an ocean full of different random shapes (a model of a quantum field with random vacuum fluctuations). In this ocean we suddenly see – we are able to distinguish – regular waves, regular shapes, structures and processes among them, which at the micro level change along with the ocean, but at the macro level are stable or quasi-stable (constantly renewed through annihilation and creation). At first we cannot tell what they are, but after a while we can recognize the evolution of a shape from its origin and birth, through its development, to its end. See phylogenetic evolution. In the same way, shapes also change ontogenetically – through the development of each individual.
For better illustration see – quantum-field-the-basic-state.pdf
See two images below
The first image shows a two-dimensional space with uniformly distributed square points; the second shows a two-dimensional space filled with square points grouped into different shapes. What happens if we stretch these two spaces uniformly from the outside? Both spaces will be equally enlarged, including the inner square points. We know the degree of magnification because we are outside the space, but a possible “flat” inhabitant of the space cannot know that his space is being stretched: to him, all the scales inside remain the same.
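The inhabitant’s predicament can be checked with a toy calculation: under a uniform stretch, every internal ratio of distances is preserved, so an observer limited to ratios detects nothing. A sketch with arbitrary example points:

```python
import math

def distance(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def stretch(points, factor):
    """Uniform expansion of the whole space, applied 'from outside'."""
    return [(x * factor, y * factor) for x, y in points]

grid = [(0, 0), (1, 0), (0, 1), (3, 4)]
bigger = stretch(grid, factor=2.5)

# the ratio of any two internal distances is unchanged by the stretch
r_before = distance(grid[0], grid[3]) / distance(grid[0], grid[1])
r_after = distance(bigger[0], bigger[3]) / distance(bigger[0], bigger[1])
print(r_before, r_after)  # identical ratios
```

Only an observer outside the space, with an unstretched ruler, could measure the factor 2.5 itself.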
At the top left we no longer see a space of square points, but a space with irregularly distributed, variously shaped points. At the top right we see a space with the probabilistic appearance of various shapes. These shapes represent excitations of the quantum field, constantly renewed through creation, annihilation and creation again; such excitations are the so-called particles or waves. Now imagine a uniform expansion of these two spaces from the outside. Everything will again be the same, only enlarged: stretching the space from outside has no effect on the mutual changes inside that space.
Now imagine a space defined by chaotic fluctuations – whether gas fluctuations or quantum fluctuations does not matter here. This space expands, or is allowed to expand, at a certain speed. If the speed of expansion were the same as the average speed of the fluctuating particles, the fluctuations would cease. The condition for sustainable fluctuation is that the speed of expansion be less than the average speed of the fluctuating particles. Even if chaotic space expands at some speed, there will still be no change in the base units and the interrelationships among the fluctuating particles – unless the particles or waves are somehow locally bound in their appearance: a kind of local expansion or, conversely, a locally lesser expansion.
Consider a sudden blowing up of space in the initial stage – the so-called inflationary process. Certainly the initial inhomogeneities, chaos or order are preserved, but they can no longer interact with each other because of the maximum fluctuation speed. Compare the speed of sound in air, which is approximately equal to the average speed of the fluctuating molecules; similarly, the speed of light in a vacuum is probably equal to the average speed of the fluctuating quantum field – the creation, annihilation and re-creation of wave-particles. We can very easily find the average speed of fluctuating air molecules at a given temperature, because we stand outside the molecules, together with our base units. But to find the average speed of quantum fluctuations is impossible, and we will never have that possibility, for we ourselves are part of the quantum fluctuations, including our base units.
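The average molecular speed mentioned above can indeed be computed “from outside.” A sketch using the mean-speed formula of the Maxwell–Boltzmann distribution, v̄ = √(8kT/πm), for nitrogen (the main component of air) at room temperature; the molecular mass is an approximation:

```python
import math

K_B = 1.380649e-23               # J/K, Boltzmann constant (exact by SI definition)
M_N2 = 28.0 * 1.66053906660e-27  # kg, approximate mass of one N2 molecule

def mean_speed(temperature_k: float, mass_kg: float) -> float:
    """Mean molecular speed from the Maxwell-Boltzmann distribution."""
    return math.sqrt(8 * K_B * temperature_k / (math.pi * mass_kg))

v = mean_speed(300.0, M_N2)
print(f"mean speed of N2 at 300 K ~ {v:.0f} m/s")
# the speed of sound in air (~343 m/s) is of the same order, as the text notes
```

The calculation works only because we and our units stand outside the gas; for quantum fluctuations no analogous external standpoint exists.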
Even if we expand chaos or order x times, or let it expand, we still get a chaotic or ordered structure. Thus we need local changes (expansion, compression, formation, etc.) in addition to the global change (expansion); the local changes must differ from the global ones. Let us leave aside philosophical considerations of how stable forms arise in a chaotic environment – events or changes that are distinguishable from each other and “constant” for a certain period of time. The question is how to choose base units in this environment: base units of time, length and mass. One thing we know – the units must be recognizable and “constant.” Certainly we can take as our basic unit the minimum amount of bound matter or energy – the quantum of energy, the Planck constant h. But even h is not 100% stable. See the “line” radiation spectra of ionized atoms and the equation below.
E = hf (h – Planck constant, E – energy, f – frequency). Planck’s constant h was derived from accurate measurements of the radiation of a black body – in practice a specially shaped black cavity. This cavity was heated to a certain temperature at which it emitted radiation at a frequency f. The higher the energy E, the higher the frequency f. But the energy at a given frequency varied only in multiples of a minimum quantum hf; in other words, the exchange of energy was not continuous but discrete.
But the most important point is that Planck’s constant is not exactly constant; it varies slightly. Its physical magnitude is taken from the peaks of the frequency range of the radiation. See the line spectrum of the emitted radiation: the lines are not exactly discrete but sharp Gaussian curves. See the Figure below – it shows wavelengths rather than frequencies, but the meaning is the same.
If we increase the amount of energy at a constant temperature, the blackbody radiation becomes more intense, but at the same frequency. Conversely, more intense radiation of a given (too low) frequency will not knock an electron out of a metal, but if we increase the frequency of the radiation striking the metal, electrons will be knocked out of their atomic orbits – see the photoelectric effect (explained by Albert Einstein).
Indivisible quanta result in recognizable, “stable” structures. Without quanta, the existence of chemical elements would be impossible. Chemical elements – different atoms combined into the beauty of all nature.
Very interesting topic – to be continued next time
Let’s go back to base units. Whenever we measure in the world, we measure only differences. We choose a clearly identifiable shape as the base unit, and the size of the base unit is exactly 1. See a Figure below.
When we measure differences in the world, we measure the difference between the chosen base unit 1 and the measured shape A or B – see the Figure above. We do not have external units, constant and independent of the world and its changes; we have only internal units that change and interact with each other. When we measure differences, we influence both the measured and the measuring value. No external units are available to us – unless we count our critical thinking as the “outside.”
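The claim that we measure only ratios against an internal unit can be sketched in a few lines; the unit and values are arbitrary illustrations:

```python
def measure(value: float, base_unit: float) -> float:
    """Every measurement is a ratio: measured value / chosen base unit.
    The unit is internal to the world and defined to be exactly 1."""
    return value / base_unit

unit = 0.30   # an arbitrary internal yardstick (say, some rod's length)
a, b = 0.75, 1.20
print(measure(a, unit), measure(b, unit))  # readings in "units"

# if the whole world (the unit included) stretches uniformly, the ratios
# -- the only thing measurable from inside -- do not change at all
k = 3.0
print(measure(a * k, unit * k), measure(b * k, unit * k))  # same readings
```

This is the numerical form of the earlier point about the “flat” inhabitant: a global change that scales the unit along with everything else is invisible to internal measurement.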
See next Figure below
There is a specific area: in the first case with a regular square coordinate grid, in the second with a deformed coordinate grid. But we can see the degree of deformation only from the outside; a participant in the deformed two-dimensional space has no way of knowing the rate of deformation. So local changes are important – local changes in relation to the local scale.
Topology of variable shapes and structures (continuous or discrete, general or local with their interactions).
See a list of pdf files below. There are topics (considerations and suggestions) on the subject of this section here.
Measurement
Base_units_preview | Base unit of base units | Density
Base_units_2 | Base_units_3 | Base_units_summary | Units_time_space_matter
The post Base units, their origin and meaning appeared first on ROMANVS Roman Mojzis.
What is mathematics? Counting. Counting what? Counting what is distinguishable to our perception. Distinguishability over time: distinguishable objects, processes or feelings must have a certain duration. See e.g. biological species (the difference between Devonian nature and present nature) – if biological species changed rapidly, we could not count, name or evaluate them. See the thermal motion of molecules or quantum fluctuations – see the images below; there are no stabilized shapes.
There is nothing to hold on to, no reference point, no form to abstract. How to determine a reference point, how to determine structures inside such fluctuations? Roughly speaking – what in the above images would we count, what would we distinguish? It is hard to differentiate anything in such an environment; everything is changing and bubbling. So here is the first condition: stabilized forms, shapes or structures. See below.
There are three clearly recognizable shapes-peaks. The other three shapes are at the limit of distinguishability: they either evolve into orderly shapes or disappear into the chaotic background. It is quite hard to keep shapes-peaks stabilized in a chaotic environment. Why count something that cannot be counted – something that does not come from clearly distinguishable shapes?
Let’s look at the logic. There are three shapes (see the image above), A = 1, B = 2 and C = 3, and logic tells us that if A < B and B < C, then A < C. That looks right. But we forget about the evolution of the shapes, the measurement process, and the condition of simultaneity. Firstly, the shapes can change during the measurement; secondly, we may measure off the peak of a shape; and thirdly, it depends on the observer – two events or shapes that look simultaneous to one observer may not be simultaneous to another. It is better to write it in probabilistic form: if P(A) < P(B) and simultaneously P(B) < P(C), then P(A) < P(C).
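The probabilistic form of the comparison can be illustrated with a toy Monte Carlo simulation. All choices here (Gaussian measurement noise, its magnitude 0.8, the “true” peak heights 1, 2, 3) are my own assumptions for the sketch:

```python
import random

def noisy_measure(true_value, noise, rng):
    """One measurement of a shape's height, possibly off the peak."""
    return true_value + rng.gauss(0.0, noise)

rng = random.Random(0)
A, B, C = 1.0, 2.0, 3.0   # true peak heights, so A < B < C exactly
trials = 10_000

count = 0
for _ in range(trials):
    a = noisy_measure(A, 0.8, rng)
    b = noisy_measure(B, 0.8, rng)
    c = noisy_measure(C, 0.8, rng)
    if a < b < c:
        count += 1

fraction = count / trials
# the ordering A < B < C is observed only with some probability, not always
print(f"A < B < C observed in {fraction:.1%} of trials")
```

With noise comparable to the gaps between peaks, the “logical” ordering holds in most trials but by no means all – exactly why the probabilistic statement is the honest one.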
No matter what the shapes are, they must be stabilized for a certain time period. Only then does language begin – sorting shapes by their common features and naming the self-similar ones. Then mathematics begins; see set theory – identify the set of given shapes and assign the corresponding names. Only then can we count the number of given shapes. See below.
Mathematics needs differentiable shapes, structures and processes – more than that, it needs stabilized differentiability: clearly defined differences over time. Without stabilized differentiability (distinguishability) there is no mathematics, no language, no science. We perceive the world around us and inside us with our five senses. We distinguish different shapes and objects in Nature, the intensity of feelings, and much more, and we express these distinguishable shapes, structures and feelings in three forms – gestures, voices and images. Gestures have remained; language developed from the voices, and writing evolved over time from the images. So we get names – spoken or written – names of self-similar objects, shapes, structures and feelings. Self-similarity is the basis of abstraction: to abstract from details to common characteristics. See the difference between an apple and a pear (though every apple and pear is an unrepeatable original). After that came mathematics, the next level of abstract thinking: to determine the quantity of objects, shapes, structures or “feelings.”
The problem with mathematics, as with language, is that it counts and distinguishes only existing objects, shapes, structures, processes or feelings. Roughly speaking, it lags behind development, behind the creative abilities. Mathematics, like language, does not count and does not name ideas – that which arises, that which is born in the mind, that which is being realized, whether new objects, shapes, structures or processes; we call all this craft, artistic or scientific activity. Mathematics, like language, cannot tell us which of the “pile” of ideas and intentions will prove useful in the future. For that there must be experience – experience that cannot itself be calculated, though its results can later be counted, analyzed, evaluated and extrapolated, until an unexpected event, idea, trend, process or object appears. In short, a new, previously unsuspected or only slightly suspected event – previously marginalized by abstraction – then greatly disrupts or cancels even the most complex calculations, evaluations, analyses and predictions.
Here we arrive at the meaning of mathematics: to lead us, through our experience, to ever greater and subtler perception – to distinguish previously indistinguishable relationships and not to settle for the currently distinguishable, perceptible and calculable forms or events. For example: how to find out what will be created? Once it is created in differentiated forms, it becomes nameable and therefore countable, and only after that can we apply mathematics with all its more complex processes.
See for example combinatorics: we have x points and we want to arrange them in different ways. But if the points are ideal – that is, indistinguishable from each other – we have nothing to hold on to, nothing to combine. See below.
When we change the position of ideal points, we do not know whether any change has occurred. We can detect a change of position only for points that differ from each other. See below.
There are 24 coloured points, but some have identical colours, so we cannot distinguish them. See also snowflakes: how to arrange them, and according to what? All snowflakes have a hexagonal structure, but if we look closely at each snowflake with a microscope, no two snowflakes in the world are exactly alike. Certainly flakes can be divided by certain features – the appearance of common basic shapes on the spines of the hexagonal structure.
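The effect of indistinguishability on counting arrangements is standard combinatorics: a multiset of n points with repeated colours has n! / (n₁!·n₂!·…) distinguishable orderings. A sketch, with a hypothetical split of the 24 points into colours:

```python
from math import factorial, prod

def distinct_arrangements(color_counts):
    """Number of distinguishable orderings of points when only colour
    distinguishes them: n! / (n1! * n2! * ...)."""
    n = sum(color_counts)
    return factorial(n) // prod(factorial(c) for c in color_counts)

# hypothetical example: 24 points in 4 colours, 6 of each
mixed = distinct_arrangements([6, 6, 6, 6])
all_same = distinct_arrangements([24])  # fully indistinguishable points
print(mixed, all_same)
```

With all 24 points identical there is exactly one arrangement – nothing to combine, as the text says of ideal points; every added distinction multiplies the countable arrangements.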
And here we are at resolution – resolving power. And resolving power is connected with influencing the observed object.
Mathematics is a special kind of ignorance: to equate (summarize) events and entities which are not equal to each other, to give a name to sets (groupings of elements), and only then to speak of the number of elements of each set. To count 20 flowers is fine. But to explain the origin of flowers by dividing them into parts, without observation, is impossible; we only reach an indescribable remainder, like the chaotic quantum field – the end of our knowledge. But that is not true! It is the beginning of further knowledge: experience with invisible but perceptible movements (desires, intentions).
Should we do away with mathematics, then, since everything is unique and unrepeatable? Is mathematics meaningless? Not at all – mathematics will continue to be used; I repeat, used in accordance with observation and verifiable models. For there is no better evaluating and directing servant than mathematics. Yes, everything is unique, even in the smallest detail, and can change rapidly; but it is precisely for evaluation and reflection that mathematics serves us. Reality is more than mathematics, and that must be respected.
Certainly there are irrational numbers – such numbers express the origin of all numbers. They are abstractions from Reality – the ideal point, the ideal line, etc.; the ideal whole numbers one, two, etc., with zeros stretching to infinity before and after the decimal point.
The logic of Reality is different from so-called ideal logic, just as Reality is different from ideal, so-called pure mathematics. If a is greater than b and b is greater than c, then a is greater than c: this is true in ideal logic, but in Reality the ratios differ not only in content but also in form – they are current, ongoing events.
Stabilized formations – the beginning of natural processes and beauties. Stabilized formations – the beginning of mathematics, logic and all kinds of sciences, crafts, arts and technology.
Nothing against abstraction – quite the opposite. Without the abstract grouping of events according to common marks, one can go no further, can gain no knowledge. But it is also possible to lose one’s way in a world of unreality and vain fictions if we forget the source of the abstracted events. That can lead to idealization, to the idolization of limited abstract thinking, with all the resulting problems for human society and for Nature as a whole. People are blind at two extremes. The first extreme is the blindness of chaos – the impossibility of abstraction, being carried along by chaotic behaviour. The second extreme is the overuse of abstract thinking without respect for the origin of abstraction, without respect for the whole – for the interrelationships among abstract processes that interact with each other and with their origin. This means reciprocity among free and grouped events.
Abstraction itself directs us to Reciprocity – always different, always more.
Imagine a closed box with two horizontally spaced walls. A ball bounces repeatedly between them. See below.
The measured position of the ball is in the middle; the direction to the right shows the future and the direction to the left the past. It is clear from the image that the uncertainty of the ball’s position increases both to the right (future) and to the left (past). The degree of uncertainty depends on many conditions – the shape and quality of the ball and walls, their mechanical properties, etc.
There are three possibilities
1) in the past – everything is hidden in chaotic behaviour
2) in the future – everything is hidden in chaotic behaviour also
3) the actual state, with some uncertainty toward the future and the past
But this process (the ball between the walls) is not alone in the world. Even when there is no outside influence on a given process, we can predict the immediate future of the process very accurately and make a rough prediction of the more distant future, but long-term predictions, or estimates of the distant past, are completely out of reach – not only because of the limited accuracy of the model, but mainly because of the influence of surrounding, unexpected events.
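The growth of uncertainty can be simulated for the bouncing ball itself. A sketch assuming a ball bouncing elastically between walls at positions 0 and 1, with a small (1%, arbitrary) uncertainty in its speed; the uncertainty of position barely matters soon after the measurement but dominates later:

```python
import random
import statistics

def bounce_position(x0, v, t, width=1.0):
    """Position of a ball bouncing elastically between walls at 0 and width."""
    period = 2.0 * width                 # the motion folds with period 2*width
    x = (x0 + v * t) % period
    return x if x <= width else period - x

rng = random.Random(1)
v_nominal = 1.0

def positions_at(t, n=2000):
    """Positions of n balls whose speeds differ by ~1% random uncertainty."""
    return [bounce_position(0.5, v_nominal + rng.gauss(0.0, 0.01), t)
            for _ in range(n)]

spread_soon = statistics.stdev(positions_at(1.0))
spread_late = statistics.stdev(positions_at(100.0))
print(f"position spread at t=1: {spread_soon:.3f}, at t=100: {spread_late:.3f}")
```

After many bounces the position is effectively anywhere between the walls: the same model run backwards loses the past just as fast, matching the widening cones on both sides of the figure.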
If we are able to count the results of a creative desire (intention), then we are invited to feel and evaluate the intensity of that creative desire in relation to the given surrounding conditions.
How to count objects before they are created? How to count, how to calculate, the intensity of the desire to create something – e.g. a pot or a song? Objects can be counted only after they are created. There remains the transition state – from idea to implementation and successful testing. When to start thinking about practical application? The power of faith, experience, determination? This is hard to evaluate mathematically – and that is the point! What is needed here is personal experience: not the experience of others, but one’s own (sometimes painful) experience, which of course reveals fantastic possibilities.
How many different curves can there be in the world? How to determine the degree of their differentiation? What will be the standard of difference? Who is going to judge?
How many mathematical curves can there be – describable, compressible curves like y = f(x), where y is a dependent variable given by the functional prescription f(x) of the independent variable x? See sin(x), tan(x), log(x), …, sin(2x) + log(3x), etc. Are there more indescribable curves than mathematically describable ones, even with the most complicated prescriptions? And what about the complexity of the prescription – would it not be simpler to give a table of values of y and x on a given interval?
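The contrast between a prescription and a table of values can be made concrete: a short formula regenerates any number of points, while an “indescribable” curve has no form shorter than its full table. A sketch using one of the prescriptions mentioned above; the interval and step count are arbitrary:

```python
import math

def tabulate(f, x_start, x_end, steps):
    """A table of (x, y) values -- the uncompressed alternative to a formula."""
    xs = [x_start + i * (x_end - x_start) / steps for i in range(steps + 1)]
    return [(x, f(x)) for x in xs]

def curve(x):
    """A describable curve: a one-line prescription, sin(2x) + log(3x)."""
    return math.sin(2 * x) + math.log(3 * x)

table = tabulate(curve, 0.5, 5.0, 9)
for x, y in table[:3]:
    print(f"x = {x:.2f}  y = {y:.4f}")
```

The prescription `curve` is a few characters long yet can produce a table of any resolution; for a curve with no prescription, the table itself is the curve’s shortest description.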
See the laws of physics, or the mathematical theory of sets, which uses the concept of a set of well-defined distinguishable objects. See pottery: either the pot is, or the pot is not. A moment ago the pot was not, now it is, and after a while it will not be again. It is the same in biology: in the beginning there is no apple, after a while there is a germ, then a growing apple, then a fully grown apple, and finally there is no apple again.
Generalized – in the beginning there is nothing, virtually chaotic quantum foam, which looks like nothing, after a while a part of the quantum foam inflates like a balloon and further expands and inside the expanding universe are created particles, stars, chemical elements, minerals and rocks, also the Earth is created with all apples, organisms and humans, who want to know the meaning of life.
To realize this in the world – there must be for some time worldly fixed forms and structures incl. processes among them. Shapes, structures and objects must be stabilized for some time – so called unchanging or quasichanging shapes and structures – see the evolution of biological species throughout the history of the Earth. There were no apples in the primordials. But there were trilobites, graptolites, etc. But I don’t see them around anymore. Sorry, I can see them, but only their fossils.
How to count the conceptual designs of pots (existing only in the potter's mind), or pots that are not yet finished, or those on the potter's wheel, or those being broken? And this is the way with everything in nature.
It is not enough to count the visible, perceptible or observable things and elements in the world; incomparably more is the ability (the experience) to create perceptible and observable things and processes – inventions, artworks, songs, technical solutions, etc.
Then comes mathematics (and science in general), which needs distinguishable structures and shapes – elements of sets. In simple words, first we need to have pots, and only then can we count them, or derive the dependence of their volume on their diameter, and much more.
In short, mathematics is useful for describing and classifying visible, perceptible, distinguishable structures. It is good to be familiar with the limits of mathematics. Mathematics cannot express what arises – what unexpectedly arises. This is a surprise even for mathematicians. Certainly, after a new object arises, mathematicians can describe its characteristics and classify the newly arisen object within the existing classification of objects. Further, mathematicians (mathematics alone cannot do this) can determine the object's future evolution with some degree of probability, based on observations of the object's changes. But that is all. Mathematics is not almighty. It does not create the new, it does not destroy the old; it just observes, describes and predicts.
In other words, practising mathematics requires stable, distinguishable shapes, structures, and the processes and changes among them, persisting in the world for some time.
How to describe changes in shapes and structures? The described shapes and structures must be stable for some time. See counting in mathematics: 1 + 1 = 2 or 2 + 3 = 5. For this we need stability and distinguishability of structures – e.g. biological species. A ladybird is a ladybird for as long as it is, so we have time to describe it. And if we don't, another ladybird is born, and it goes on like this for thousands of years. Trilobites, for example, were on the Earth for about a hundred million years, but they are no more. The same story will one day hold for the presence of ladybirds in the world. But let us cheer ourselves up with the thought that instead of ladybirds there will be another species of beetle, which our descendants will again have time to describe.
Back to descriptions of shapes and structures. We know that these must not change too often before we can describe them. In other words, shapes must not change faster than our ability to describe them. There is a slip between the change of the described shape and the change of the describer (usually a human, sometimes a machine): slow changes are described by other, faster changes. Both change, but the described changes are slower than the describing ones. If there were no noticeable difference, description would be impossible.
What is the purpose of this? The purpose is to have growing experience through our perception of what is happening around us and why it is happening – for which mathematics is the best tool. In short, to stimulate our senses with the use of mathematics. Mathematics cannot replace our way of acquiring knowledge and experience.
Mathematics, like computers, is the best and most wonderful tool ever created for learning about the real world. There is no better helper, no better tool, than mathematics together with computers. But on the other hand, there is no worse master than mathematics with computers. A good servant – the best servant – but the worst master. The master was, is, and should remain the human being.
See the mathematics part – combinatorics is closely linked to the number and distribution of otherwise indistinguishable points. As a substitute for indistinguishable points we can mention electrons; we cannot differentiate between them either. More points or electrons does not mean that all possibilities can be realized – see entropy and the second law of thermodynamics. There must be some initial differences in energy levels.
Whatever originated in the world cannot be divided into exactly identical parts. Whatever came into being is an original; every structure is unique. Unique is a whole that includes all subsets, each of which is unique. It is not possible to construct two equal segments and then join them into exactly twice the length, and it is equally impossible to split one segment into two equal halves. Splitting the continuum requires irrationality.
The equation 1 + 1 = 2 has no exact counterpart in the real world. There are no two identical shapes, structures, elements, objects, or anything else. Yes, we use this equation as a model. But we must know that it is only a model, our approximation.
In the same way we might pretend to see an ideal straight line when looking at a very finely polished metal surface, or an ideal sphere when looking at the prettily polished balls in bearings. If we come closer, we see something like a mountainous landscape – a surface-roughness profile that vibrates with the thermal movement of molecules and atoms. If we come closer still, we can see the foggy appearance of particles as excitations of a quantum field.
See also action at a distance. There has to be physical contact of bodies – that is why, centuries ago, gravity was hard to understand, and later electrical and magnetic forces: how is it possible to act through empty space without direct physical contact of the bodies? Much later we recognized that the contact of bodies itself also takes place at a distance – see the Pauli exclusion principle applicable to electrons in atomic shells. When we take a closer look at the contact of bodies, we see only the deformation of the electromagnetic fields of the atomic shells – like bringing two magnets together with the same poles facing each other.
How to distinguish random events? How to distinguish a random number series from a non-random one? After all, any two number series differ if they are not identical. We have to establish some kind of regularity – something that repeats despite the chaotic background, something that has a pattern relative to the length of the series. So it also depends on the length of the number series. And who selects what to distinguish – man or machine?
Visibly distinguishable, perceptually distinguishable patterns. How, among random states or events, to begin with distinguishability? After all, all random events are distinguishable, one from another. But the frequency of occurrence of distinguishable features gives a flat line – there is no preferred state, shape or process. Randomness cannot be determined this way, however, because a regular series with a clear regularity will also have an equally frequent occurrence of its elements. Thus testing for randomness consists in using higher perception, in finding order – meaningful order (and the question is what is meaningful). No matter where we observe, there is randomness – unpredictability – everywhere. Everywhere? Even random events (chaos) need a framework, a limitation.
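The point that mere frequency counting cannot detect order can be shown in a few lines of Python: a perfectly regular digit series has exactly uniform digit frequencies, indistinguishable by this test alone from an ideal random source. A minimal sketch:

```python
from collections import Counter

def digit_frequencies(digits: str):
    """Relative frequency of each digit 0-9 in the string."""
    counts = Counter(digits)
    n = len(digits)
    return {d: counts.get(d, 0) / n for d in "0123456789"}

# A perfectly regular series: 0123456789 repeated -- clearly ordered,
# yet its digit frequencies are exactly uniform (0.1 each).
regular = "0123456789" * 100
freq = digit_frequencies(regular)
print(freq["0"], freq["7"])
```

The flat histogram is exactly the "straight line" described above: equal frequencies are necessary for randomness but nowhere near sufficient.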
How to distinguish the indistinguishable, the originally invisible, if we are not honest even in distinguishing the distinguishable?
See musical compositions – series of musical notes (c d e f g a h c). Write down a composition (or a part of one) as a numerical series and evaluate its randomness; usually there are repetitions, but there can be compositions that are numerically random and still beautiful. See coding with random numbers – we must have a series of random numbers. So, add randomness to a piece of music?
Mathematics is the result of differentiation. Mathematics is compressible, but nature is not. However, from a single quantum, or even deeper, from nothingness, the entire universe can arise, with all its different, distinguishable structures. Even so, mathematics cannot yet come into being, for each structure is different, distinguished from every other; they have no common denominator, no similarity to one another. Everything would be 1, 1, 1, 1, … etc. So the necessary further condition is the appearance of self-similar structures – see elementary particles, chemical elements or biological species. Each individual may be an unrepeatable original, but they have a common denominator, a basic characteristic. They are grouped into sets. With sets, language began to exist – to name the sets. After that, mathematics came into being. The common denominator is abstracted into an ideal form: everything that looks like a circle is idealized into an ideal circle, even though every real circle differs from every other. And it does so within a given interval – say, with respect to the basic building blocks. How to distinguish a circle from an ellipse? Where does a circle end and an ellipse begin? Does it make sense to ask this question? Rather, ask why self-similar structures exist in nature at all, in an ocean of chaotic structures.
Put differently, mathematics is a subset of the real world. Not all continuous functions are differentiable, and not all beautiful shapes are mathematically describable. Mathematics has no chance at all of describing, let alone predicting, random, unpredictable structures.
A brief recapitulation:
Aristotle – limited motion of bodies
Newton – unlimited motion of bodies
Einstein – "limited" motion of bodies in spacetime
The history of extrapolation: extrapolation from observations of natural processes at a given level of natural processes.
Newton's first law: a material body remains at rest or in motion at constant linear velocity unless an external force acts on it.
The law of inertia. One consequence of the laws of motion is the escape velocity: every body accelerated to escape velocity will move away without limit.
According to physics textbooks, Newton's first law of motion is a brilliant extrapolation of our experience.
Every body remains at rest or in straight-line motion unless external forces cause it to change its state. In the real world, where there are always some forces (resistive, frictional, … , gravitational), the body will stop after some time – or, in the case of escape velocity, "stop at infinity". In other words, the so-called ideal case is not possible in the world. It is possible to reduce the forces resisting motion, but not to cancel them. Moreover, even as a so-called ideal state, the definition (without external forces) is meaningless: there is a contradiction in the ideal case itself. The misunderstanding lies at the very beginning – at the level of observable natural processes.
It follows from the very nature of a material body: wherever there is a body or bodies, there are always external forces (gravity – the deformation of spacetime). There is no possible state of material bodies without external forces. The external (gravitational) forces are the result of the deformation of spacetime, and they are connected with every material body.
As we know today, bodies exist in the universe only together with time and space. There are also forces interacting with each body as a result of space and time – see the gravitational force. For there is no space or time without matter, and vice versa: there is no matter, no body, without space and time.
And so Newton's first law of motion, in the light of spacetime bound to matter (bodies), does not make sense even for the so-called ideal state. There will always be a gravitational force acting on a body, e.g. in intergalactic space, and such a force always influences the body. The material body itself is an indivisible part of space and time. The material body, as we perceive and measure it, is an excitation of the omnipresent quantum field.
Thus the ideal state – without any acceleration, with no forces acting on the body – is impossible. The nature of the contradiction lies at the very beginning: in the fact that it is a body. If it is not a body, that is a different matter – but we do not know and cannot describe the motions of non-bodies. Where there are bodies, there is always force action and the impossibility of unlimited motion. This follows necessarily from the nature of bodies. Even if there were one single body, it would necessarily affect itself gravitationally in its motion.
There will always be forces among bodies. Yes, in the case of escape velocity there is no chance of stopping the accelerated body, but such a body will still be influenced by the gravitational field of the first body and, no doubt, by its own gravitational field.
There is another point of view: the body is the source of a gravitational force, and such a force curves spacetime.
Bodies, by their very nature, cannot move freely through spacetime without forces.
There is an equivalence: a body means gravitational forces, and gravitational forces mean a body or bodies. Without bodies there are no gravitational forces, and vice versa: without gravitational forces there are no bodies either.
Let's imagine a thought experiment: there are only two bodies in the universe, accelerated to escape velocity from each other – so, as we would calculate from the equations of classical mechanics, they will move away to infinity.
But the reality, in light of Einstein's equations of general relativity, is different – see the following.
Two bodies accelerated away from each other to escape velocity do not move apart to infinity; the two bodies deform spacetime. The conclusion: the two bodies do not simply move away, they follow the main curve of spacetime. In an ideal state the main curve will be a great circle of a sphere. The result: the two accelerated bodies will meet again after a long time. That time is given by the (growing) mass of the two bodies and their escape velocity from each other.
This is not a negation of Newton's law but a refinement of it, using knowledge that Newton did not have in his time. In other words, every model, every theory, is valid for a certain range of natural processes, and it is impossible to establish a universal formula – especially since we do not know every process in Nature, and every process is unique; there are only common characteristics, like the appearance of hexagons in snowflakes.
The age-old human temptation: applying abstract models or ideas from a given level of natural processes to the whole of nature or the universe, and then being surprised that nature or the universe does not work according to them.
However, models are good – proven models. Yes, they are good and repeatedly tested, but at a given level, at a given scale of natural processes. Moreover, a model is a model – i.e. a simplified, abstracted description – so it has limited validity even at the original level from which it was created.
Back to bodies:
Bodies have to stay together in spacetime (spacetime-matter). There is no chance for material bodies to escape from each other to infinity. There would be an exception only if there were an external force (power) above all bodies – sometimes called dark energy.
Conclusion:
The greater the mass of the bodies and the greater their escape velocity, the greater the deformation of spacetime, the smaller the radius of the main circle, and the shorter the time until the bodies meet. And vice versa: the smaller the mass of the bodies and the smaller their escape velocity, the smaller the deformation of spacetime and the longer it takes for the bodies to meet.
In a rough analogy, it is like sailors on two ships in the Earth's ocean, drifting apart into "infinity" until they meet on the other side of the globe.
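The sailing analogy above can be put into numbers. A minimal Python sketch, assuming an idealized spherical Earth (radius about 6371 km) and a constant, purely illustrative ship speed of 10 m/s – both values are assumptions, not taken from the text:

```python
import math

def meeting_time(radius_m: float, speed_mps: float) -> float:
    """Two ships leaving the same point in opposite directions along a great
    circle meet on the far side after jointly covering the full circumference:
    t = 2*pi*R / (2*v) = pi*R / v  (seconds)."""
    return math.pi * radius_m / speed_mps

# Earth-sized toy numbers: radius ~6371 km, ship speed 10 m/s
t = meeting_time(6_371_000, 10.0)
print(t / 86400, "days")
```

This toy formula only illustrates that on any closed great circle, steady motion apart eventually becomes motion together; it is not a general-relativistic calculation of the two-body scenario.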
Even light cannot leave this spacetime (spacetime-field), just as light cannot leave our universe. Light (electromagnetic waves) is given by the properties of spacetime. It does not matter whether we call such spacetime the elementary quantum field or the ether.
Back to simple mathematics. We have a base coordinate system – e.g. the polar coordinate system: a central point with N lines passing through it and circles around it. This coordinate system looks the same whether we scale it up or down. To know the scale, we need to introduce another point. See below.
And only now do we have the basis of the polar coordinate system. This basis is clearly given by a line (meridian) that passes through the two points marking the segment – the base unit for length, position and direction. This second point must be defined properly, in relation to the observed surrounding reality. An infinitesimal distance dL alone is not enough.
It is impossible to describe the processes inside a single point (of zero size) using units derived from the existence of another point and the space between them – the base distance; otherwise we run into contradictions. See the singularity of the universe, the so-called black holes, or the singularity of the Earth's poles. In the case of the poles the situation is simple: if we are standing on a pole, it is hard to give a direction to go. It would be enough to name a meridian, but meridians are indistinguishable at the pole itself. So we have to move aside far enough to distinguish which way to go. Here it is the resolution of two points – the point of our present position and the point of the pole (north or south) – that matters.
to be continued …
Let's go on with simple mathematics – especially the much-quoted Gaussian density function. See below.
Such a function is given by a very simple formula, y = f(x); see below,
where instead of e (the base of natural logarithms) the number 2 is used.
There are infinitely many ways to modify this function into very interesting waveforms – see the waveform of a wave packet, e.g. a photon. See below.
The oscillation has a sinusoidal shape, but the "bound" sinusoid is enveloped by the Gaussian curve of the normal distribution: the sine wave is modulated by the Gaussian curve. The result is the "bound" wave of the appearance of a particle – a wave-particle, a wave clump.
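The wave-packet shape described above – a sine wave modulated by a Gaussian envelope – can be sketched directly. A minimal Python example; the width sigma and the frequency are arbitrary illustrative values, not taken from the text:

```python
import math

def wave_packet(t: float, sigma: float = 1.0, freq: float = 5.0) -> float:
    """A sinusoid 'bound' by a Gaussian envelope:
    exp(-t^2 / (2*sigma^2)) * sin(2*pi*freq*t)."""
    envelope = math.exp(-t * t / (2 * sigma * sigma))
    return envelope * math.sin(2 * math.pi * freq * t)

# Sample the packet: oscillation is strong near t = 0 and vanishes in the tails,
# exactly the "wave clump" picture.
samples = [wave_packet(k / 20) for k in range(-60, 61)]
print(max(abs(s) for s in samples))
```

Making sigma smaller localizes the clump more sharply, at the price of a broader mix of frequencies – a rough picture of the wave-particle trade-off.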
See the curves below – the dependence of the probability density on the distance from the nucleus of the hydrogen atom.
There are clearly visible peaks in the curves that determine the wavelength of the emitted radiation. But we also see that the peaks are not discrete but continuous – they are an integral part of the curves. In other words, the wavelength of the emitted radiation (when the electron jumps) varies.
Let's go on. Imagine a continuous and smooth line. See the figure below.
This line is not necessarily smooth and continuous when we "look" at it more closely. It may consist of many probabilistic appearances – see vacuum-field excitations. From quantum foam, anything can be created. See the next figure below.
We can count the peaks with some resolving power – count the peaks of the given curves, the curves of the probability density of vacuum excitations. However, these curves with their peaks must be stabilized for some time. Stabilization may consist in the regular renewal of the curves and their peaks.
Mathematics cannot explain Reality, Nature in its diversity. Mathematics is a product of abstract thinking, a simplification of observed processes. These processes, even if they last for a certain period of time, can disappear or be influenced by other processes of which we have no idea.
Maths is a product of grouped random events, not their source.
Let's imagine two series of numbers. One is a series of predictable numbers – e.g. x + 1, or the function sin(x) or log(x), etc. The other is a series of unpredictable, i.e. random, numbers: it is impossible to find any pattern in their repetition. Now let's sum the two series. What do we get?
1, 2, 3, 4, 5, … and 0.237, 0.986, 0.011, 0.455, 0.671, …
After summing the two series, we get a new series
1.237, 2.986, 3.011, 4.455, 5.671, … , etc.
The integers retain their pattern, but their decimal parts are random. The situation changes when we add, in the third decimal place, a slowly progressing predictable series to the sums above.
0.001, 0.002, 0.003, 0.004, 0.005, … added to 1.237, 2.986, 3.011, 4.455, 5.671, …
After summing the two series, we get a new series
1.238, 2.988, 3.014, 4.459, 5.676, … How do we extract the originally added numbers with the simple progression 0.001, 0.002, 0.003, …? We will not be able to distinguish the added numbers – their predictable progression. In other words, we get a purely random series again. It is impossible to extract any meaningful series back out of such a random series – or rather, we could fabricate any series we wished. A predictable series dissipates completely in a mix of random numbers, unless we know either the predictable series or the random one. And here we are at entropy. Anyway, this is the principle of an unbreakable cipher: encryption using a series of random numbers. The advantage of such a cipher is its unbreakability; the disadvantage is that the series of random numbers must be transmitted between the parties and must never be reused.
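The unbreakable-cipher idea sketched above – mixing a predictable series with a secret random series, i.e. the additive one-time-pad principle – can be illustrated in a few lines of Python. The integer key values and the fixed seed are illustrative assumptions chosen to keep the demo repeatable:

```python
import random

def mix(predictable, key):
    """Add a secret series of random numbers to a predictable series
    (the additive one-time-pad idea)."""
    return [p + k for p, k in zip(predictable, key)]

def recover(mixed, key):
    """Only the holder of the random key can undo the mixing."""
    return [m - k for m, k in zip(mixed, key)]

predictable = [1, 2, 3, 4, 5]
rng = random.Random(0)                         # fixed seed: repeatable demo only
key = [rng.randrange(1, 1000) for _ in predictable]
mixed = mix(predictable, key)
print(recover(mixed, key))  # -> [1, 2, 3, 4, 5]
```

Without `key`, the `mixed` series is statistically as random as the key itself – which is exactly why the key must stay secret and must never be reused.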
Let's go on. Imagine again two related series of numbers. One remains the integers 1, 2, 3, … , etc. The other series will be random numbers of varying magnitude, i.e. with different positions of the decimal point.
1, 2, 3, 4, 5, … and 0.237, 9.860, 0.011, 45.500, 0.671, …
After summing the two series, we get a new series
1.237, 11.860, 3.011, 49.500, 5.671, … We can see at first sight that every second number – or rather its integer part – is an odd number. Now imagine that the series of random numbers contains a randomly changing position of the decimal point. In short, after adding the two series, we get longer or shorter stretches where we can determine the added numbers, mixed with stretches where we cannot.
And that brings us to noise – noise that influences the transmitted signal, as well as the noise of chaotic events that will influence our abstract models in the future; or, vice versa, the question of how certain we can be of what happened in the past.
Every theory, every model is valid in some range – see the image above. The more general the theory, the greater the range of validity, with less accurate results; the more detailed the theory, the narrower the range of validity, with very accurate results.
Imagine a number series of random numbers – a series without end. As the series progresses, we can discover certain patterns: not just an ordered sequence of digits, but perhaps a functional relationship. See the following number series.
…..804854251355393280854916365179028116820495388064589809751234321012343210190812402118072221120937557584523297371065…..
There is a clearly identifiable pattern in the series – the repeated run 12343210. This random series, with a little imagination, represents random fluctuations of the quantum field. The fluctuations are completely random and unpredictable; they are called zero-point vacuum fluctuations. In such an environment there are excited states of the probability of the appearance of, e.g., an electron in an atomic shell. Is there another fundamental question? How to distinguish random structures from non-random ones? Or how does one non-random structure in a chaotic ocean distinguish other non-random structures? For there is no outside observer. How to measure the degree of randomness or the degree of order – by using a selected standard non-random structure? On the one hand tight order, on the other the randomness of chaos.
See the next series of random numbers below.
…….59528890163938123332114194511344217155134146095204761012343214612343222274101234320715686828980012343213189412243210186055872687666887359546223433138123432195216864838………
We see groups of ordered digits – 1234321 – that occur repeatedly. Let's look closer, and we see that these ordered groups contain changes within them: for example, instead of 1234321 we see 1224321, etc.
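Searching for the ordered group 1234321 – including slightly corrupted occurrences such as 1224321 – can be sketched as a sliding-window comparison. A minimal Python example over the series quoted above; the one-mismatch tolerance is an illustrative choice:

```python
def find_pattern(digits: str, pattern: str, max_mismatches: int = 1):
    """Positions where `pattern` occurs in `digits`, allowing up to
    `max_mismatches` differing digits (so 1224321 still matches 1234321)."""
    hits = []
    for i in range(len(digits) - len(pattern) + 1):
        window = digits[i:i + len(pattern)]
        mismatches = sum(a != b for a, b in zip(window, pattern))
        if mismatches <= max_mismatches:
            hits.append((i, window))
    return hits

series = ("59528890163938123332114194511344217155134146095204761012343214612343"
          "22227410123432071568682898001234321318941224321018605587268766688735"
          "9546223433138123432195216864838")
hits = find_pattern(series, "1234321")
for pos, window in hits:
    print(pos, window)
```

Exact hits and near-misses come out together – a small model of recognizing a regular motif "grafted" onto a chaotic background, where the motif itself is not perfectly preserved.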
Waves or particles? It is hard to say – the situation is always different: a foggy, blurred background of the quantum field, from which wave-particle shapes appear, disappear and are renewed. Always different – an indefinable grouping of chaotic events into shapes and structures that appear continuous to us. We have a limited experience that confuses us – shapes, spheres… but we too are constituted by grouped chaotic events. And yet we have perception.
Yes, when we look closely at the structures and shapes, we recognize indefinable, indescribable chaotic events grouped into familiar forms and shapes – appearances of particles, these further into atoms, molecules and minerals, up to living organisms.
All these events grouped into shapes and structures are always different, indefinable in their details. Sure, there is an abstract framework, but even that is subject to change – either internal or external.
Lawfulness, regularity or periodicity in a set of data is more likely to be found by those who expect it and know how to look for it; the most precise observational data alone do not lead to a deeper understanding.
But periodicity and regularities are not perfect – better said, they are not eternal; in short, they are valid for a limited time. Even if they hold for a very long time, like the orbits of the planets around the Sun, that does not mean the stability of those orbits will be the same for millions of years to come. There are other laws – like the motion of comets or nuclear forces – that can unexpectedly disrupt the stability of planetary orbits. Not to mention the basic fact that all periodicities and regularities are "grafted" onto a purely chaotic environment. Thus even the most precisely measured "constant" slightly changes in the "rhythm" of vacuum fluctuations. So what is the point? Is it about observation or about abstract thinking? It is both: to learn how to abstract what is significant at the current time, and at the same time to know the general regularities (laws) that can affect even the most stable natural processes. That is, not only to think, but to give place to other observations – perhaps in the field of comets or nuclear forces. All this is called Experience – fine and firm (decisive) perception.
Another way of looking at random series – this time in graphical form. How to estimate the source of a line or curve? Better put: what are the curves that make up the resulting line or curve? See below.
But a straight line can turn into a curve with a different course. See below
The picture above is closely related to resolving power – how to distinguish marked differences in the course of a given curve.
See the image above – there is a course of continuity, an inexpressible course; it is impossible to produce a formula, to describe that course mathematically as a whole. Yes, we can describe parts of the course more or less precisely, but only parts, and it is impossible to predict the future course or to estimate the past one. Remember the number series mentioned above with their framed regularities: such regularities appear regular, but in fact this is the result of our simplification of the current state.
We can verify by observation only the current state.
to be continued ….
See below – there are articles on the subject of this section. If you want, download the pdf file.
See below – there are unsorted brief remarks awaiting processing into articles.
The post The nature and limits of mathematics appeared first on ROMANVS Roman Mojzis.
For our world the following sentence is valid: the more gentleness, the more beauty. It is quite an interesting event in nature that such incredible gentleness and beauty (see flowers, butterfly wings) is superior to such rough reality. Everywhere in nature we see and perceive a fine balance that controls the most violent worldly forces – let us call them, for example, the four fundamental forces – very rough and violent processes at the micro or macro level. The micro level means thermal movements or quantum fluctuations; the macro level means processes in the universe. See the two images below – the left illustrates the quantum field and the right illustrates metagalaxies.
In contrast stands the immense gentleness roughly in the middle – see organic compounds, the structures and bonds of organisms, and especially the artworks, music, craftworks and inventions of human beings. See the next image below.
The appearance of beauty and functionality in the world: we get a curve of the distribution of beauty in the universe as a function of magnification, with a maximum in the middle and two minima on either side. See the image below.
See also artworks. A work of art such as a painting or a sculpture must be viewed at a certain distance. It is difficult to judge details with a microscope or magnifying lens – or, on the contrary, from too far away. A certain distance is needed, which depends on the size of the artwork. Certainly under a magnifying lens one can see interesting and beautiful details, but under a microscope the interesting and beautiful details diminish, and when we submit the artwork to structural analysis, that is the end of the beauty. Conversely, if we move too far away from the artwork, we can perceive the whole, but without the details that give the work its beauty and freshness. And when we move away from the work by tens if not hundreds of metres, we no longer see anything – the beauty has disappeared, our eye can no longer distinguish it. And so it is with the nature around us.
A machine is not only functional but also beautiful. Beauty is the expression of well-balanced functionality – functionality not only within the machine, but also towards its immediate surroundings. See the following image.
From observed experience we know that beauty, functionality and mutual harmony occur in transitional states – areas of transition between pure order and pure chaos. Most beauty is contained in the transition between laminar and turbulent flow. See the beautiful curves when the flow of a medium changes from laminar to turbulent. See the image below.
The most beautiful shapes and structures lie between the laminar and the turbulent state. This beauty disappears in the chaos of turbulence or in the tightness of orderliness. Compare the ugly disorder of chaos with the cold and boring beauty of ideally spaced crystals.
Certainly the chaotic appearances are different every time – unexpected, and always ugly and disorderly. Certainly orderliness is expected and predictable, but boring and cold – its "beauty" as cold as ice. But the transition state is always new, unexpected, and beautiful or functional. When is that moment of creation of the transient state? Better expressed: when in spacetime does that impression of beauty, of functionality, appear?
The artwork can only be viewed after it has been made. It is hard to judge the right moment of creation, painting, formation what it will be and how beautiful and stimulating it will be.
True Functionality is connected with Beauty. All the beauties of the world together with the possibilities to form matter are given by atomic physics – more precisely by the intolerance of bound electrons to occupy the same shells in the atomic orbits – see Pauli’s exclusion principle.
The base of matter – atoms: very stable grouped sets of different excitations of the omnipresent quantum field.
Let’s go back from nuclear physics to atomic physics. The electron envelopes (shells or orbitals) of the atoms of the periodic table appear incomplete, except for the noble gases. The origin of the incompleteness is the electron intolerance formulated by W. Pauli as the exclusion principle. See below a part of the periodic table
Incomplete atoms have a strong tendency to fill their orbitals to the full electron count. This is the origin of the chemical bonding of atoms into myriads of different molecules, especially carbon compounds. E.g. HF, H_{2}O, H_{3}N (NH_{3}), H_{4}C (CH_{4}), … , (C_{6}H_{10}O_{5})_{n}, …
CaF_{2}, … , etc.
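The shell-filling arithmetic behind this incompleteness can be sketched in a few lines. This is the standard 2n² rule of atomic physics, not something stated in the text; the code below is only a minimal illustration:

```python
# Sketch of shell capacities behind the exclusion principle: shell n
# offers n**2 orbitals, and Pauli allows at most two electrons
# (opposite spins) per orbital, so shell n holds at most 2*n**2 electrons.
def shell_capacity(n: int) -> int:
    return 2 * n * n

capacities = [shell_capacity(n) for n in range(1, 5)]
print(capacities)  # [2, 8, 18, 32]
```

Atoms whose outer shell is short of these counts are the "incomplete" atoms that bond chemically.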
The same incompleteness of atoms can be observed in flowers – stamens and pistils, which must be joined by external forces in order to preserve the biological species.
Along with the possible combinations of atoms into molecules, discretization decreases and continuity increases. Such fragile and gentle structures as artworks, songs, flowers and the like are modulated at the chemical bonds thanks to the delicacy of the valence electrons placed around such a rough and brutal base as the atomic nucleus. A comparison – it is the same as with sculptor’s clay. Such clay has been prepared from rocks by crushing, pulverising and floating – the result is a plasticised fine clay for sculptors and ceramists.
By the way – see quantum mechanics in prehistoric ages, especially the treatment of metal by heat. The ancients must have found it very strange that when they heated a metal it would at first not shine, then appear dark red, then red, then orange to yellow, and finally almost white. Classically, however, the colour of the radiation should have stayed the same, only changing in intensity from the slightest glow to a blinding shine.
What’s the point of all this – all the “incomplete” atomic orbitals, all the patterns, whatever there is in nature? What is the purpose of knowledge whose current state is quantum field theory – the field properties of matter and radiation? The “peak” is the wave equation, starting with Erwin Schrödinger. There is a new fundamental constant – the Planck constant, alongside the speed of light. How to prove, how to verify, that Planck’s constant has been the same throughout the history of the universe? After all, in physics we measure only and only ratios – the ratio of the measured quantity to the base unit. How do you detect a change in the base unit itself? Yes, in terms of the basis of knowledge, there have been no new discoveries, no new fundamental insights. It is still just a reworking of the procedures for solving the wave equation and applying them to other systems and problems – certainly quantum computers, certainly the search for gravitons, quantum chemistry and many many more. But basically it is just a confirmation of physics ideas that are 70 years old and more – gravitons, antimatter, quantum fields with zero-point vacuum fluctuations, with excited particle-waves that repeatedly disappear and appear again. It is still just a calculation of quantum states, e.g. the standing-wave states associated with particle-waves. What remains? A unified field theory? Reducing quantum mechanics to the simple statistical appearance of particles, impulses and energies is hardly possible anymore. What then? To find a universally valid formula for ever-changing conditions? Reality still surprises us; it is impossible to write a universal formula – a universal formula for chaotic events, or a universal classification of biological species. After all, the difference between the biology of the Palaeozoic and the current Quaternary is vast.
The answer is probably for each person to decide for himself in what situation he finds himself.
A short note on the origin of waves, particles, atoms, chemical bonds and thus molecules. Let’s take an ice crystal. When we heat this crystal, we get water or water vapour; and when we cool the water, we get the original ice crystal again. That sounds logical. We even know why it happens – chemical bonds dependent on internal energy. But let’s think differently – at the beginning of the universe, according to known hypotheses, there were no ice-crystal snowflakes. Where were the laws of physics hidden in the primordial forms of our universe? Where were the kinds of chemical bonds (the exclusion principle for electrons) hidden? Where was the diversity of chemical compounds and the diversity of biological species through the geological epochs hidden? Let’s go back to our ice crystal. Sometime in the primordial forms of our universe the “condensation” of particle-waves must have started, then the fusion of atoms – but according to what laws? Where had all the laws been hidden? However, laws, rules and processes are frameworks – each species is an unrepeatable original; no two snowflakes in the world are exactly alike. Every physical process is slightly different at the quantum level. What were the degrees of freedom at the beginning of the universe? In other words, whatever was randomly realized in a given framework, whatever “condensed” in the course of the cooling of the universe, already has a uniquely given path with all its reciprocal properties. That leads to the framework repeatability of physical and chemical processes as we know them today. If we accept this hypothesis, then there may be other groupings, other chemical bonds and other biological species in different universes. The temporary conclusion is – random processes take place in a given frame and are then repeated, again randomly, but in a sequence of given frames, which then gradually change. Randomness vs. necessity. But there is a contradiction.
If the particles are roughly the same (or so we think), why do biological species change rapidly over geological epochs?
If we plant a pine seed it won’t grow into a maple tree, and vice versa. This is the way the world works. Although there were no pine trees in the Devonian – and in the future… ?
Go back to Beauty and Functionality in the world. Where was an artwork (and in the same way every product) before it was created? In the beginning there was a movement of Wisdom (mind) inside the creative man; after that there was the artwork. What is going to happen to the artwork through its entire existence? What happens to the artwork when it ceases to exist? There will be a new artwork, better than the last one. But the ability to create cannot be destroyed.
Some people are sensitive enough to feel a hidden intention which will later be seen visibly in the world. The purpose of our life is to grow in this experience, to perceive the invisible hidden intentions, verifying this perception in its visible results in the world. More about this in the section Theology.
See abstract images and real images – not only artworks but also pieces of engineering endeavour such as machinery, engines, clocks, turbines, etc. We are familiar with real images or real engines. Abstract ones are new to us. But like real images, abstract images can be beautiful. Their common attribute is beauty – in the first case the beauty in the real world, in the second case a hidden beauty gradually becoming visible. It is the same in engineering; there the common attribute is functionality. Real images for us could be abstract for others and vice versa. Consider that the prehistoric fauna and flora of the Devonian could be abstract for us. It is the same in engineering – consider the first engines of the 19th century.
Current nature in the world, with its real images, is only one expression of Beauty – one expression from many possible expressions, from infinitely many possible expressions. The common denominator of all images (“real” and “abstract”) is Beauty of some kind – shapes vs. colours. See also multiverse universes.
Engineering – an interesting development of the human beings called engineers. In the beginning, in prehistoric times, there was only ore, coal, limestone and fire. Throughout ancient times, engineers made metal tools, weapons and simple mechanisms (water wheels, screw pumps). In the Middle Ages, engineers built the first complex mechanisms such as mechanical clocks, weaving looms and so on. It is not that mechanisms built or replicated themselves. And in modern times, we already have mechanical computing devices, typewriters and precise pendulum clocks, right down to high-precision atomic clocks.
All the tools, devices and processes over the ages (from swords and knives through various mechanisms to high-precision atomic clocks) did not happen randomly. They did not just happen by the way. They were created by the great effort and painful experience of a few people called engineers. The painful experience was replaced by the immense joy of a working invention, of putting an idea into practice. It doesn’t matter what you are – engineer, scientist, artist, craftsman, carrier, economist, hairdresser, pilot, doctor, teacher or something else. One should be something, to reach the fullness of humanity: live out the painful experience with the following immense joy of the obtained abilities. And so on, still. In engineering, there is no absolute best solution to a given problem. There is a good solution and an even better solution. But no one can guarantee that the even better solution is the best.
The devices were built not by themselves, but by experienced and capable people. Watchmaking is a beautiful example: the accuracy of clocks increased through the ages to the current level of atomic clocks. These instruments did not construct themselves – they were built by engineers. In fact, a machine of given accuracy produces another machine (even a copy of itself) with at most the original accuracy – no more! A production machine of given accuracy cannot produce, for example, gears more accurate than the accuracy of the machine itself.
At first glance, a mystery of the increasing accuracy of products: how can production machines of a given accuracy yield products of much higher accuracy?
to be continued …
Files to download: Technical drawings
The post Art, Craft, Science appeared first on ROMANVS Roman Mojzis.
All mechanics, all mechanical properties, are also given by the intolerance of electrons in their atomic shells. All material science, with its elasticity and plasticity, is given by the exclusion principle.
Let’s turn to elasticity, especially spring elasticity.
Here we have four examples of elastic behaviour in mechanics. We suppose the ball, the wall and the construction to be absolutely rigid. The reference system is external – connected with the “paper”. See the images below.
The first example is clear – a ball with mass M1 moving with velocity vM1 against a fixed rigid wall with a spring of stiffness k.
The second example is similar to the first with the important difference that the fixed stationary wall has disappeared and instead there is a standing ball with mass M2.
In the third and fourth examples, a ball with mass M1 is enclosed in a rigid closed construction with mass M2.
The third and fourth examples look the same, but they are not: the difference is in the initial conditions.
In the third example, the ball with mass M1 has an initial velocity vM1, while the closed rigid construction with mass M2 is standing. Both left and right springs with the same stiffness k are free – not compressed.
In the fourth case, both the ball with mass M1 and the structure with mass M2 are initially standing. The right spring with stiffness k is maximally compressed. The left spring with stiffness k is free.
Next, we will solve the measurement of motion in the previously mentioned examples with respect to the reference system – the case where we replace photons with mechanical balls of mass m and velocity v. The problem is the so-called rest mass of the bodies. We know very well that there is no ideal rest system in the world; neither is there an ideal rest mass.
In the first case we have to solve the problem of the impact of the ball on the spring: to find the duration of the impact, the maximum compression of the spring and, if we please, the progression of the velocity and acceleration of the ball. First I must write that solving this case exactly in the real world is impossible. We must make the case easy – simplify it, take the case out of the world, idealize it, ignore all irrelevant details. But the case is given by the world; it cannot exist without the world, in the same way as organisms – tear off a flower and study it, especially its living processes. Nothing against simplification, but let’s not forget that our model has its own limitations.
Initial conditions:
The ball with mass M has an initial velocity v in the direction of the axis x of the spring; the spring is weightless with a stiffness k; the wall is fixed – it is also our reference point.
Solving only for the maximum compression of the spring is easy.
We use only two formulas – the first for the kinetic energy Ek of the moving ball:
Ek = (1/2)·M·v^2
and the second for the potential energy Ep of the compressed spring:
Ep = (1/2)·k·x^2
From the law of conservation of energy, Ek = Ep, we can solve for the maximum compression x_max of the spring: (1/2)·M·v^2 = (1/2)·k·x_max^2, so x_max = v·√(M/k).
To solve for the duration of the impact and the progression of the velocity, deceleration and acceleration, we must use the classical-mechanics consideration of the force between the ball and the spring:
the inertial force Fi = M·a
the “spring” force Fs = −k·x
These two forces are equal throughout the duration of the impact:
Fi = Fs, so M·a = −k·x, or M·a + k·x = 0
From this we can solve for the acceleration a of the ball at any distance x from the beginning of the impact using numerical methods; on the other hand, to solve the above case analytically we must use a differential equation, M·(d²x/dt²) + k·x = 0, where M represents the mass of the ball, a the acceleration, x the distance, k the stiffness and t the time.
The result is the functional dependence of distance on time, x = f(t). With the initial conditions x = 0 and velocity v at the moment of first contact, it is x = (v/Ω)·sin(Ω·t), where Ω² = k/M. The spring stays compressed while 0 ≤ Ω·t ≤ π, so the duration of the impact follows from Ω·t = π, i.e. t = π·√(M/k).
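The analytic results can be checked with a minimal numerical sketch. The values of M, k and v are illustrative assumptions, not taken from the text, and the integration scheme (semi-implicit Euler) is likewise my choice:

```python
import math

# Illustrative values (assumptions, not from the text).
M = 2.0   # ball mass [kg]
k = 50.0  # spring stiffness [N/m]
v = 3.0   # initial ball velocity [m/s]

# Analytic results derived above.
x_max_analytic = v * math.sqrt(M / k)           # maximum compression
t_impact_analytic = math.pi * math.sqrt(M / k)  # duration of contact (half period)

# Numerical integration of M*a + k*x = 0 while the spring is compressed
# (x measured as compression, ball in contact while x >= 0).
dt = 1e-6
x, vel, t = 0.0, v, 0.0
x_max_num = 0.0
while True:
    a = -k * x / M      # acceleration from the spring force
    vel += a * dt
    x += vel * dt
    t += dt
    x_max_num = max(x_max_num, x)
    if x <= 0.0:        # the ball leaves the spring
        break

print(x_max_analytic, x_max_num)  # the two compressions agree
print(t_impact_analytic, t)       # the two durations agree
```

With these numbers, both the compression (0.6 m) and the contact duration (about 0.63 s) come out the same analytically and numerically.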
The angular velocity Ω can also be expressed in cycles per second. Instead of the radian as the unit of angle, we can use 1 full turn as the unit: then 1 means 360 degrees, i.e. 2π radians. An angle of 90° (π/2) becomes 1/4, 180° (π) becomes 1/2, 270° (3π/2) becomes 3/4, and finally 360° (2π) becomes 1. So here we are at revolutions per second – generally, cycles per second. A second, however, is itself defined as a certain number of periods of a selected radiation that serves as the base unit of time. So we count cycles of anything per base unit of time – for example, the number of various wavelengths per 1 selected (metric) wavelength, or the number of events per 1 wavelength.
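The conversion between radians per second and cycles per second can be sketched in two lines; the M and k values are illustrative assumptions:

```python
import math

# Converting angular frequency to cycles per second: Omega is in
# radians per second, and one full cycle is 2*pi radians, so dividing
# by 2*pi gives revolutions (cycles) per second.
M, k = 2.0, 50.0              # illustrative mass and stiffness (assumptions)
omega = math.sqrt(k / M)      # angular frequency [rad/s]
f = omega / (2 * math.pi)     # frequency [cycles per second]
print(omega, f)
```

Here omega = 5 rad/s corresponds to roughly 0.8 cycles per second.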
In the end we have a successful model with results and, no doubt, such results are verified by measurement. What more to wish for? To solve the problem of elasticity? To uncover hidden processes with a verified model? To use our model for the recognition of hidden processes connected with elasticity? Elasticity is the base of the universe. Without elasticity there could be only singularities (close to 0 or ∞). But elasticity needs space. And space needs matter – without matter there is no empty space. And matter (excitations of the quantum field) needs time. And matter has weight. Can we explain the subject of space-time-matter with our simplified models? Nothing against idealization – otherwise we wouldn’t do anything; we have to start somehow. But we need to go further: correction about the unknown. That is the meaning of knowledge.
The basis of thermodynamics, and indeed of the whole world, is chaos: chaos in a closed space, chaos of moving particles, molecules, atoms. This chaos can be converted into mechanical work by the controlled expansion of the initially chaotically moving particles. See an engine piston or a turbine blade.
Let’s have a model – a closed box with N particles. Initial conditions – all particles have the same velocity. After some time we find that the velocities of the particles differ from each other: the speeds of the particles in the closed space vary from almost zero to some maximum. There is a kind of average particle speed that stays close to the value set by the initial conditions.
The velocity distribution of the particles depends on:
– the volume of the box
– the size of particles (very interesting model with particles like ideal points with zero size – these never meet each other)
– the number of particles
particle size is relative to the size of the box
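The spreading of initially equal velocities can be illustrated with a toy model. This "random pair energy exchange" scheme is my simplification, not the hard-sphere dynamics described in the text, but it shows the same qualitative effect – the total energy is conserved while the individual energies spread out:

```python
import random

# Toy model (an assumption, far simpler than real collisions): N particles
# start with identical energies; random pairs repeatedly share their
# combined kinetic energy in a random ratio. Each exchange conserves the
# pair's total energy, yet the individual energies drift apart.
random.seed(1)
N = 1000
E0 = 1.0
energies = [E0] * N

for _ in range(100_000):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    total = energies[i] + energies[j]
    r = random.random()
    energies[i], energies[j] = r * total, (1 - r) * total

mean_E = sum(energies) / N
print(mean_E)                        # the average energy stays at E0
print(min(energies), max(energies))  # but individual energies now vary widely
```

After enough exchanges the energies range from nearly zero to several times the mean, exactly the qualitative picture described above.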
The rule is – the higher the speed, the shorter its duration; the same for the lowest speed. If we want to keep the lowest or the highest velocity of a particle for a longer time, we would have to isolate the particle from the other particles – perhaps by expanding space, or otherwise. See the rubber-sheet thermodynamics in two-dimensional space below.
See the image below – the thermodynamics of a rubber sheet, i.e. of an elastic membrane.
Thermodynamics of a rubber sheet – an elastic membrane full of chaotically moving particles. The disadvantage of isolating a particle is that we can keep its velocity for a longer time, but we cannot change it. Thus, it would be better to stretch the elastic membrane adequately according to how we want to accelerate the particle or, on the contrary, decelerate it. This leads to very interesting results.
Let’s go back to thermodynamics before we begin to solve the thermodynamics of an elastic membrane or an elastic volume. How to describe purely chaotic processes? Mathematics doesn’t have a chance in the case of pure randomness – it has nothing to go on. Yet thermodynamics exists in clearly given mathematical equations, such as pV = nRT, a very familiar equation for engineers. Sure, but there are differences. If we have only one chaotic environment, we won’t be able to study anything, describe anything, or predict or verify anything. See the first image below – only one chaotic environment.
So we need to have at least two different chaotic environments. One environment will be the chaotic motion of air molecules in the atmosphere and the other environment will be the chaotic motion of molecules in a closed cylinder with a piston. See a second upper image.
We will ignore temperature for now and examine what are called pressure and volume. It is possible to think like this, and mainly to measure – volume and pressure, the weight of a mass relative to the given piston area. We obtain the equation p1·V1 = p2·V2: the product of pressure and volume before expansion is equal to the product of pressure and volume after expansion. Let us remember that we only know ratios, not absolute values. What is the absolute value of chaotic motion – relative to what? Relative to zero? To what zero? To zero particles? But there is no zero even in a vacuum – see the exclusion principle. Not to mention that, relative to zero, everything is infinite! And each unit we have chosen as the base unit of measurement is just a chosen ratio – the ratio between the measured value and the chosen (unit) value. This is a very important remark for the following considerations in thermodynamics – especially absolute temperature, absolute pressure, absolute volume, absolute energy, absolute entropy, absolute enthalpy, etc. But the most important remark concerns having at least two chaotic environments with different characteristics. For example, the expansion of a piston inside a steam cylinder surrounded by dense, viscous oil is different from the expansion of a piston inside a steam cylinder in the ambient atmosphere. Not to mention the change of viscosity, especially in the case of a non-Newtonian fluid.
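The p1·V1 = p2·V2 relation can be illustrated with a tiny sketch; the numbers are illustrative assumptions:

```python
# Boyle's law sketch: for a fixed amount of gas at constant temperature,
# p1*V1 = p2*V2. The values below are illustrative assumptions.
p1 = 100_000.0  # initial pressure [Pa]
V1 = 0.002      # initial volume [m^3]
V2 = 0.005      # volume after expansion [m^3]

p2 = p1 * V1 / V2  # pressure after expansion
print(p2)  # 40000.0 Pa

# Only the ratios matter here, in line with the remark above that we
# always measure ratios against a chosen base unit.
ratio = (p1 * V1) / (p2 * V2)
print(ratio)  # 1.0
```

Note that the calculation never needs an "absolute" pressure of the chaotic motion itself – only the ratio between the two states.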
See a cylinder with a moving piston inside it. There are two different chaotic environments. See the image below.
1) atmosphere filled with chaotically moving gas molecules – see blue dots
2) chaotically moving gas particles enclosed by a piston in the cylinder volume – see orange dots
Both the gas and the surrounding atmosphere are purely chaotic environments. In both environments, molecules collide. But in the case of the enclosed gas, the collisions among molecules are more violent and more frequent than among the molecules of the surrounding atmosphere. A gas enclosed by a piston in a cylinder will have a tendency to expand relative to the atmosphere until the violence and frequency of collisions among the molecules are equal on both sides – not to mention the final damped oscillations. See the image below.
How to measure, how to evaluate, the violence and frequency of molecular collisions?
Some base units need to be chosen. See the next image below.
Put a unit weight on the piston and count how many unit weights the frequency and violence of the collisions will support – at least for a beginning.
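One standard way to quantify the "violence and frequency" of the collisions is the kinetic-theory pressure formula p = N·m·<v²>/(3V) – the momentum delivered by molecular impacts per unit area. This formula is textbook kinetic theory, not derived in the text, and the numbers below are illustrative assumptions:

```python
# Kinetic-theory sketch (standard result, not derived in the text):
# pressure equals the momentum delivered by molecular impacts per unit
# area, p = N * m * <v^2> / (3 * V). All numbers are illustrative.
N = 2.7e25        # number of molecules in the box (assumption)
m = 4.65e-26      # mass of one N2 molecule [kg]
v2_mean = 2.43e5  # mean squared speed [m^2/s^2] (about 493 m/s rms)
V = 1.0           # box volume [m^3]

p = N * m * v2_mean / (3 * V)
print(p)  # on the order of atmospheric pressure, ~1e5 Pa
```

The weight on the piston balances exactly this quantity times the piston area.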
Rocket propulsion
Maximum acceleration, limited by the strength of the material.
straight movement
rotational movement
oscillating movement
to be continued next time …
Steam engines, turbines, pumps, motors, details as valves, propellers etc.
Files to download: Part_engines_I, Technical drawings, Water turbines, Heusinger timing gear
The post Mechanics and Technology appeared first on ROMANVS Roman Mojzis.
Note at the beginning – the appearances of all distinguishable shapes, structures, events or processes have a probability distribution.
We have an example:
16 balls passing through a Galton board – a step to the right is marked I, a step to the left 0.
See an image below – there is the Galton board with 4 rows.
1 2 3 4 5
There are 5 boxes and 16 possible travel paths (differently coloured). Consider the ideal state of probability – 6 different paths lead to box 3, 4 different paths to box 2, 4 different paths to box 4, the single path IIII leads to box 5 and the single path 0000 leads to box 1. The full list of possible paths is given in the following table – see below
0000 | 0I00 | I000 | II00 |
000I | 0I0I | I00I | II0I |
00I0 | 0II0 | I0I0 | III0 |
00II | 0III | I0II | IIII |
As we see, the frequency of each possible travel path is the same (marked blue). There are 16 independent travel paths through the Galton board. See the image below
All paths have the same value. See the upper image with 16 identical rectangles – the probability distribution. The probability P is the same for all 16 events: a flat “density function” without any curvature. There is no preferred path for balls passing through the Galton board.
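The 4-row board can be checked by brute-force enumeration; a minimal sketch, with the box numbering 1–5 following the text:

```python
from itertools import product

# The 4-row Galton board: each ball makes 4 left/right decisions
# (0 = left, I = right). The final box is 1 + (number of rights),
# giving boxes 1..5. All 2**4 = 16 paths are equally likely.
box_counts = {box: 0 for box in range(1, 6)}
for path in product("0I", repeat=4):
    rights = path.count("I")
    box_counts[1 + rights] += 1

print(box_counts)  # {1: 1, 2: 4, 3: 6, 4: 4, 5: 1}
```

Every individual path occurs exactly once, yet the boxes collect 1, 4, 6, 4 and 1 paths – the grouping discussed below.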
When betting on the lottery, hardly anyone will choose the number combination 11111111 or 44444444. The vast majority of people will choose numbers like 27543764 or 17329546 or something similar, because the probability of 8 identical numbers in a lottery is, in their opinion, vanishingly small. That’s true, but it is also true that their chosen numbers have the same probability as 8 identical numbers. In other words, each particular number arrangement has the same (very low) probability of winning.
But look at the upper table closely. The probability of ending in a box that collects series with mixed digits is greater than the probability of the box collecting a series with one repeating digit (in this case IIII or 0000). In other words, we see different occupations of the lower boxes by the passed balls. That’s reality. See the next table below (differently coloured) – green for the results with one difference, red for the results with no difference, dark grey for the results with two differences.
0000 | 0I00 | I000 | II00 |
000I | 0I0I | I00I | II0I |
00I0 | 0II0 | I0I0 | III0 |
00II | 0III | I0II | IIII |
The probability of a result with mixed digits is greater than the probability of a result with one repeating digit: the more paths a box groups together, the greater its probability. See below the distribution based on the upper table
The distribution is quite different from the constant distribution. What is the reason for this difference? The probability of each individual pass is always the same, constant. True – but the balls are grouped into one box regardless of the differences in their passage through the Galton board. The grouping of different paths, different events, into one place (box) is the reason why we get a non-uniform distribution curve of the frequency of occurrence (density function).
See below the Galton board with 10 rows and 11 lower boxes.
The distribution of balls passed through such a board is illustrated in the second image above. What is the conclusion? Without collection boxes, without limited spaces into which different events are collected, there would be no observable differences. See differentiability (distinguishability) – the resolving power of people or devices.
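For the 10-row board the same counting can be done directly with binomial coefficients; a short sketch:

```python
from math import comb

# For the 10-row board, the box reached with k rightward steps
# (k = 0..10) collects comb(10, k) of the 2**10 = 1024 equally
# likely paths.
counts = [comb(10, k) for k in range(11)]
print(counts)       # [1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1]
print(sum(counts))  # 1024
```

The middle box alone collects 252 of the 1024 paths, which is why the observed distribution peaks in the centre.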
Probability, to be of use, requires distinguishability – distinguishability of the possible events.
It is difficult to define the probability of events on a continuum, where there are no clearly distinguishable possibilities. See the probability of a dodecahedron falling on one given side: the probability is 1 in 12; for an N-faced polyhedron, it is 1 in N. But for a ball? How to define a side? A side would be one point, or one small area. A real (deformable) ball never falls on the same deformed side twice, nor an ideal ball on the same point. In this case probability is not calculable – not to mention a continuum distribution, where an interval in which the event occurs must be created artificially.

So the first condition is the presence of stabilized and distinguishable sides, shapes, events or intervals – it does not matter what we call them. This is what we silently suppose, and this is the basis of the probability calculus. Thus, before there was probability, there must have been distinguishable and stabilized structures – so-called initial conditions. Probability is therefore a product of initial conditions; it cannot explain the initial conditions by itself, only use them properly.

The initial conditions correspond more closely to the case of the falling ball. Every “side” of a dropped ball is different from every other; there are no clear boundaries (sides). The reason is the irreversibility of processes. If we put a really manufactured ball through measurements of falls, one side will never be repeated. Every time we feel that one side has repeated, we will look closer and see a difference at the molecular level due to thermal motion. The same is true at the quantum level: no two events are identical. So if we want two so-called identical events, or sides, we have to work the ball into an N-faced polyhedron – to suppress the expressions of molecular motion to such a level that there is a clear distinguishability, a definability, of the relevant sides. We must suppress the expressions of irreversibility.
Irreversibility is a basic principle of the Universe. See the quantum field with its “zero-point” vacuum fluctuations – completely random and indescribable changes inside it, still unpredictably “bubbling”. Pure chaos. It is impossible to reverse a motion exactly in such circumstances. Everything is always different and new, although at first glance it might look the same. Here we have the very important question of how stabilized (regular and predictable) and distinguishable structures can exist in the middle of such a random and indescribable quantum field – see waves, particles, crystals, organisms, etc. These stabilized shapes, structures and processes in the world are always changing – slightly, but always: no two snowflakes have been exactly the same throughout the history of the Earth, like all atmospheric conditions and clouds, and every individual of a species differs from the other individuals of the same species. Sure – it looks the same at first view, but only at first view; in reality all of nature is still developing – see the Cambrian organisms compared to the Devonian organisms, and both compared to the present ones.
Each snowflake is different from the others – no two snowflakes are alike – but all snowflakes have a common characteristic, a common expression of their existence: a hexagonal configuration. Consider snowflakes, or every roll of a dice. Every roll of the dice is unique. Surely the dice will land on 1 of its 6 sides. But if we observe and record the process of each roll, no matter which side it ends up landing on, we will find that the process of each roll is different – movement versus rotation. Even if we make trillions upon trillions of dice rolls, each roll will be a unique, unrepeatable original. This reminds us of the origin – the indescribable and unrepeatable, purely random processes of vacuum fluctuations in the quantum field.
See the motion of an electron around an atomic nucleus. A closer look at the trajectory of the electron would show chaotic irregularities caused by quantum fluctuations in the electric field. The average deviation from the global trajectory is zero, but the root mean square deviation leads to a small shift in the energy level. This shift has indeed been measured as part of the Lamb-Retherford shift.
Each snowflake is different from all others, but also each electron is different in its motion from all other moving electrons in the entire universe. And how many electrons there must be in the universe! This leads me to the final conclusion that each vacuum fluctuation is different from all the vacuum fluctuations throughout the history of the universe.
And again the question: in such a chaotic quantum-field environment, where did the dice (crystals) with their sides to fall on come from? What is repeatability and non-repeatability? See the roll of a dice: as many rolls, so many unrepeatable and indescribable processes. But in the end, every dice always lands on 1 of its 6 sides. It is hard to stay on an edge – a very unstable position.
So is the dice roll repeatable or unrepeatable? In terms of microstates, every roll of the dice is non-repeatable. In terms of macrostates, every roll of the dice is repeatable, predictable within the given possibilities – in this case the 6 sides of the dice. For every roll of the dice we need a dice, so we need shapes and distances – in short, space. And we also need time – each roll of the dice takes a time duration (repeated oscillation changes). Briefly – probability needs time and space and distinguishable shapes (subjects) inside them. And again the question – where did time and space and the distinguishable “regular” shapes or structures come from?
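The micro/macro distinction can be illustrated with a toy simulation: each simulated roll follows a different microscopic "process" (here crudely modelled by the random-number stream), yet every roll ends in one of the 6 macrostates, with roughly equal frequencies. A sketch, with illustrative numbers:

```python
import random

# Toy illustration of micro vs macro states of a dice roll. The
# microscopic history of each roll is modelled (very crudely) by the
# RNG stream; the macrostate is just the final side, 1..6.
random.seed(42)
rolls = [random.randint(1, 6) for _ in range(60_000)]

counts = {side: rolls.count(side) for side in range(1, 7)}
print(counts)  # each side appears roughly 10 000 times
```

No two runs of the underlying "process" are identical, but the macrostate statistics are stable and predictable.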
A temporary conclusion? It is not possible to return to the past, but it is possible to change the future, just on the basis of the past. In other words the arrow of time is a given and the impossibility of a time machine also.
Chaos, as we know, is unstable. Chaotic behaviour is held together by boundary conditions; without boundary conditions there is no chaotic behaviour, no chaotic environment. See thermal motion, the fluctuations of molecules – Brownian motion. In such a “thermal” environment there is complete chaos – unpredictable, incalculable, purely random. No two events are exactly the same in terms of the positions or momenta of the particles. After all, each particle is different from the other particles, and the difference among them is inexpressible – see irrational numbers. Thus we have described the macroworld, the macrostate of thermal fluctuations. We suppose every fluctuation is a non-repeatable original, through all the history of the universe.
Just as every snowflake is a non-repeatable original even though all share the same basic hexagonal structure – and how many snowflakes have been created throughout the history of planet Earth! There is no problem for them to differ in details. We know from mathematics that there are infinitely many numbers between any two points, however close.
Let's go to the microworld – or better, to the microstate. The universe is filled with an elementary quantum field. In the quantum field there are violent changes of “shapes”, etc. These changes are called vacuum fluctuations. See the image below.
We suppose every fluctuation inside the quantum field is a non-repeatable original. Each of its appearances is slightly different.
What entitles us to this premise, to this idea? The measurement of the Lamb shift for the electron. A closer look at the electron’s trajectory reveals chaotic irregularities caused by quantum fluctuations. These lead to a small shift in the energy level. This shift has indeed been measured as part of the Lamb shift.
A chaotic, random environment. Everything that happens is always different and new, unrepeatable. No two events, no two appearances are the same. In such an environment repeatability is impossible – one cannot return exactly to the initial conditions. There can be no reversible processes in a purely random environment.
See the collision of balls at the macrostate level. It is not possible to exactly reverse the trajectories of colliding balls after their impact, even if we set the initial conditions as precisely as possible – because of the thermal fluctuations of molecules and atoms. It is very difficult to transfer the velocity from the slower ball to the faster ball accurately. In an angled impact, the faster ball will transfer some of its speed to the slower ball at any angle, but the slower ball can only transfer its speed to the faster ball in a perpendicular impact. The greater the difference in the velocities of the two colliding balls, the greater the accuracy required.
At first glance it appears that one ball stops and the other moves on with slightly more speed – see the impact of billiard balls. But there is not much difference in speed there. If there is a difference of many orders of magnitude between the velocities of colliding balls of the same mass, then there cannot be a complete transfer of kinetic energy – that is, the slow ball will not simply stop while the much faster ball moves on faster. This is due to thermal fluctuations. The higher the temperature, the more unstable the impact, and the more marked the effect in the case of two or more orders of magnitude of difference in the speeds of the colliding balls. For temperatures close to absolute zero, the impact will be close to ideal. In other words, there the instability in the complete transfer of kinetic energy will only become apparent for many orders of magnitude of difference in the velocities of the colliding balls.
Not to mention that every impact takes some time. The problem decreases with the elastic behaviour of the balls.
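For contrast, here is the idealized textbook model the text is arguing against – a one-dimensional perfectly elastic collision with no thermal fluctuations at all. In this idealization, equal masses exchange velocities completely; the author’s point is that real impacts only approximate it. The function name and the numbers are illustrative, not from the text.

```python
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for a perfectly elastic 1-D collision.

    Standard result from conservation of momentum and kinetic energy;
    thermal fluctuations are deliberately ignored in this idealization.
    """
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

# Equal masses: the moving ball stops and the resting ball takes over
# its full velocity -- a complete transfer of kinetic energy.
print(elastic_1d(1.0, 5.0, 1.0, 0.0))  # (0.0, 5.0)
```

Any real collision deviates from this by at least the thermal jitter of the molecules, which is why exact reversal of the trajectories is unattainable in practice.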
At the level of the microstate: elementary particles – these particles (probabilistic appearances, excitations of the elementary quantum field) are influenced by the surrounding environment of the quantum field – see the Lamb shift in the case of the electron.
What does this mean? In the microstate of quantum mechanics there is no chance of purely reversed motion at all. The equations of quantum mechanics are reversible, but the influence of the random environment must not be marginalized. Equations represent a model, a simplification for solving something; equations do not represent reality. At the level of the microstate it seems at first view that it is possible to reverse the arrow of time, but on closer view we observe the impossibility of satisfying the initial conditions in reversed motion. There is a very small but respectable influence of random vacuum fluctuations, just as with the thermal fluctuations of the macrostate.
The 2nd law of thermodynamics is based on a purely chaotic system in which every process is a non-repeatable original – not only Brownian motion at the macrostate level, but also vacuum fluctuations at the microstate level of the quantum field. At first view it seems that reversed motion is possible, but at the microlevel classical mechanics is not applicable – we must solve for the probabilistic appearances of particles.
To verify the above suggestions:
1) measure the Lamb shift – its irrationality and the non-repeatability of its appearance – see how each electron has a different path through its orbit;
2) measure the collision of two elementary particles, especially when one of them has a much greater velocity than the other (a very big difference of “speeds” between them) – there cannot be a complete transfer of momentum between the two particles, if only because both particles are excitations of the same quantum field with random behaviour;
3) measure the impact of balls with many orders of magnitude of difference in velocity at different temperatures.
Quantum-mechanical equations that do not account for the above facts are incorrect. They are good as a model at first view. Such equations are a brilliant success of human thinking – but they are just a model for very useful practical calculations. A model cannot include everything, especially the randomness of the environment whose excitations are then solved in the form of wave-mechanical equations. It is impossible to reach through them a deeper level of knowledge of the omnipresent reality – see the three-body problem in classical mechanics.
See the next illustration – every electron jumps after excitation from a higher level to a lower level in a unique way (the influence of an omnipresent purely chaotic environment, the quantum field). See below the line spectrum with the probability distribution of the emitted radiation with two peaks.
No two wavelengths are alike. No two wavelengths have the same value. But they have a common denominator – Planck’s constant. Roughly written: Planck’s constant is the peak around which the wavelengths of the emitted line-spectrum radiation pile up. The value varies slightly depending on the “pile-up” of the emitted photons.
See the wavelengths in the upper image: 1 > 2 > 3 > 4. The average wavelength does not really exist. Such a wavelength is only the calculated peak of the probabilistic distribution of the emitted photons with their different wavelengths. I suppose there are no two identical wavelengths in any limited interval. See mathematics – in any interval of the number line, however small, there are infinitely many real numbers, differing from each other by ever smaller values.
Every wavelength is a non-repeatable original. We can certainly approximate the wavelengths of the emitted radiation numerically to a certain number of decimal places. And for calculations of any accuracy this is necessary. But let’s keep in mind that each wavelength is a non-repeatable, irrational number. The same holds for the distinguishability of electrons according to their positions: every electron has a unique position with respect to every other. Such a position could be expressed by an irrational number.
Let’s go on. Not only is each emitted electron a unique, unrepeatable original, but so is each part of the vacuum fluctuation of the quantum field. Particles in the form of shaped waves are a product of the quantum field. They are continually influenced by self-similar excitations and by the excitations of other particles (packets of waves).
Very simply written – the quantum field, full of random and “shape”-non-repeatable vacuum fluctuations, is a very productive environment for dissipation. It is impossible to exactly repeat any process, or even to return to the original conditions before the start of the process. There will always be an unrepeatable change, always a state different from the previous one. The slightest changes at the quantum level, on the scale of Planck’s constant, are very small, but not so small with respect to the particles – especially when we know that particles (the probabilistic appearance of a wave “packet”) are a product (excitations) of the quantum field. These particles therefore respect the environment in which they arise. And that environment – the quantum field – is random and indescribable in all directions. This brings us to the so-called arrow of time, which is also valid at the micro-level of the quantum field, and not only at the macro level of bodies in mechanics and molecules in thermodynamics.
Example: in mathematics, in cryptography, we have one 100% secure cipher, and that is the cipher using a series of random numbers. The principle is that we add the message code, in numbers, to a series of random numbers. The resulting series of numbers will again be random. The disadvantage of this cipher is twofold: first, the series of random numbers must never be repeated, and second, the receiver and sender must first exchange the series of random numbers in some other way. This is the principle – randomness absorbs regularity. And so it is with the quantum field. That is why particles are constantly being renewed.
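The cipher described here is the one-time pad. A minimal sketch in Python – the text adds decimal digits, while this sketch XORs bytes, the standard binary equivalent (that substitution is mine, not the author’s):

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    """One-time pad: combine each message byte with a random key byte (XOR).

    The key must be truly random, at least as long as the message, and
    never reused -- exactly the two drawbacks mentioned in the text.
    """
    assert len(key) >= len(message)
    return bytes(m ^ k for m, k in zip(message, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

msg = b"randomness absorbs regularity"
key = secrets.token_bytes(len(msg))
cipher = otp_encrypt(msg, key)
assert otp_decrypt(cipher, key) == msg  # round trip recovers the message
```

The ciphertext is statistically indistinguishable from random bytes – the regularity of the message has been absorbed by the randomness of the key.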
Conclusion:
The text above explains the arrow of time on the basis of the purely random behaviour of the quantum field. This quantum field fills the entire universe with all its visible and invisible matter. The indescribably random behaviour means a one-directional arrow of time; thus, even in the case of a collapse of the universe back to a singularity, there will be no reversal, but new, previously unrepeatable events.
Bounded chaos is the source of the arrow of time. Unrepeatable originality in every moment, in every progression, in every change. The whole of Nature is modulated by these purely random chaotic fluctuations. Structures (basic laws, basic appearances) are given – the basic features of, e.g., a species: an apple, a pear, a snowflake. Every snowflake is an unrepeatable original, just as every apple or pear or any other biological species, or any chemical substance or chemical element, is an unrepeatable original under a strictly given structure – see the hexagonal structure of every different snowflake. No structure is able to support, to sustain, itself – no matter how complicated such structures may be. Finally, even chaos itself is unable to support itself, for chaos cannot sustain itself in chaotic behaviour.
What is the difference between order and chaos? Order is describable, as opposed to chaos. See mathematical functions (sin x, log x, etc.) – a definite order, not allowing even the slightest exception. Chaos, by contrast, is indescribable – although in terms of thermodynamics we can generally describe the behaviour of many chaotically moving particles, and here we are in statistical physics. But let’s go back to probability. In terms of the frequency distribution, it doesn’t matter whether the source (e.g., a coin toss) is regular or purely random – we get the same probability distribution curve for the frequencies of the tosses.
There is another common denominator between order and chaos. Describable mathematical functions (e.g. log x, x^{2}) are very tightly defined, without any exception – very strictly prescribed, very heedless of the surrounding circumstances. The same holds for total chaos: also very heedless of circumstances. How do we define very beautiful curves like a woman’s face? How do we define the development of shapes from the germ cell through childhood to adulthood – especially to the final curves of a beautiful female face? A pessimist will say: beautiful, but unfortunately only for a limited time. An optimist will say: at least beautiful for a while. And the realist will wonder why that is. Why is beauty so limited in the world? It is good that it exists – but why so briefly?
And a very frequent question – where did such beautiful shapes come from in nature, shapes that are neither chaotic nor strictly mathematical?
Or another question: can there be a probability calculus without distinguishable structures, whether distinguishable by shape or by position? Imagine an ideal ball instead of a die. For every roll of the die, one of the six sides will land. But what about the ball? Where does it land, at what point? A different one each time! As many rolls of the ideal ball, so many different points. If we throw a 20-sided “ball” (an icosahedron), we get one side out of 20. If we throw an N-hedron, then the result of the throw will be one side out of N.
In conclusion: if we want probability as we know it from ordinary calculations, then we need certain distinguishable bounds and limits – limits on the number of sides, or on positions, lengths, times, shapes, or structures. Without distinguishable limits and bounds there can be no probability calculus. Probability depends on the distinguishability of limited structures.
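The dependence of probability on the number of distinguishable outcomes can be stated very simply: for a fair N-hedron each side has probability 1/N, and as N grows toward the ideal ball this probability vanishes. A tiny sketch of that limit:

```python
from fractions import Fraction

def side_probability(n_sides):
    """Probability of any single side of a fair N-hedron."""
    return Fraction(1, n_sides)

for n in (6, 20, 1_000_000):
    print(n, side_probability(n))

# In the limit of the ideal ball (N -> infinity) every individual
# landing point has probability zero: there are no distinguishable
# bounds left to count.
```

This is the numerical face of the claim above: no distinguishable limits, no usable probability.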
Let’s go back to probability. To illustrate, take a coin that we toss as many times as we want. Surely every coin toss is a unique process. But in the end, every coin falls on one side or the other: heads or tails. Heads will be marked with a 1 and tails with a 0. The question is how to predict which side will come up. We cannot answer which side will fall on the next toss – prediction is impossible. We only know, as a frame of reference, that the more tosses there are, the more nearly equal the frequencies of 1 and 0 will become. The probability of one side falling is still 1/2 – even if the same side falls repeatedly without interruption. Even if the same side falls 10 times in a row, that does not mean the probability for the next toss changes. This moves us into the area of fairness of conditions – a fair coin and a fair toss. If we kept getting one side of the coin in the first 10 tosses, does that mean that the coin or the toss conditions are not fair?
An interesting question – how do we know whether a coin is fair? It is easy to tell: the frequency of 1 (heads) will be more or less equal to the frequency of 0 (tails).
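“More or less equal” can be made precise with a crude statistical check – a sketch of my own, not the author’s method: a fair coin’s heads frequency should lie within a few standard deviations of 1/2.

```python
import math

def looks_fair(tosses, z=3.0):
    """Crude fairness check: is the observed heads frequency within z
    standard deviations of 1/2?  `tosses` is a sequence of 0s and 1s."""
    n = len(tosses)
    sigma = math.sqrt(0.25 / n)        # std deviation of the frequency
    return abs(sum(tosses) / n - 0.5) <= z * sigma

# A perfectly balanced sequence passes; a 60/40 one fails:
print(looks_fair([0, 1] * 50_000))              # True
print(looks_fair([1] * 60_000 + [0] * 40_000))  # False
```

Note that such a test can only reject unfairness with some confidence; it can never prove fairness – which echoes the text’s point that pure probability is an idealization.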
Pure probability is only an ideal state – as ideal as an ideal line or an ideal point. There is no such thing in real nature; it is a human abstraction. We cannot realize an ideal point in the real natural world, just as we cannot realize an ideal line or an ideal probability.
The Galton board gives us a picture of a probability distribution – see below.
Let’s do a thought experiment. At the top of the board, over one sharp edge, we let balls fall, regularly alternating left and right. We will know exactly where each ball falls, whether to the left or to the right. If we have 32 balls, they will be regularly divided into 16 balls left and 16 balls right. And the situation is repeated on the two sharp edges in the second row of the Galton board: again the balls will be regularly divided into two halves at both edges. And this situation repeats as many times as there are rows of sharp edges.
Now let’s do a real experiment with the Galton board. From above the balls fall, first onto the first row with one sharp edge; after division the balls continue to fall onto the second row with two edges, then onto the third row with three edges, then onto the fourth row with four edges, … until the Nth row with N sharp edges.
What is the point of the above two experiments, the thought experiment and the real one? To realize that one ball will keep falling to the right and another ball will keep falling to the left. This is the case when the initial number of balls is equal to 2^{N}.
How is it possible that the balls at one edge do not fall alternately to the right and to the left? In short, why do they fall unpredictably – once to the right, then to the left, then twice to the right, then again to the left, then five times to the right, then twice to the left, then again to the right, and then three times to the left?
Why can’t we predict which way the next ball will fall? We only know that the probability of one side is still equal to 1/2, regardless of the falls that have already taken place. Whatever series of sides has previously fallen, the probability will still be 1/2. Even if the same side falls ten times, the probability will not decrease; it will still be 1/2. How is it possible for a series of 100 identical falls, for example all to the right? Surely that is no longer possible! Yes, it is! It is possible for 1,000,000 identical falls, or trillions of trillions of identical falls, to occur in an unbroken series. Remember the thought experiment: if we have 10 rows on a Galton board and 2^{10} balls (1,024 balls), one ball will have to fall continuously to the left and another continuously to the right. With 100 rows on the Galton board and 2^{100} balls (approx. 1.27E+30), one ball will still have to fall always to the left and another always to the right. And with 1,000,000 rows on the Galton board we would need 2^{1,000,000} balls (a number too big for my calculator). In short, with N rows on the Galton board, we have to let 2^{N} balls pass through so that one ball keeps falling to the left and another keeps falling to the right.
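The arithmetic behind these ball counts can be checked exactly: the chance that a single ball goes the same way at every one of N rows is (1/2)^N, so among 2^N balls the expected number of all-left balls is exactly one. (Expected, not guaranteed – in a random experiment, unlike the deterministic thought experiment, the all-left ball may or may not actually occur; that hedge is mine.)

```python
from fractions import Fraction

def p_all_one_side(n_rows):
    """Probability that a single ball falls the same way at every row."""
    return Fraction(1, 2) ** n_rows

# With 2**N balls the expected number of balls that go left at every
# row is exactly 1 -- matching the counts in the text.
for n in (10, 100):
    expected = 2**n * p_all_one_side(n)
    print(n, expected)  # expected count is exactly 1 for each n
```

Exact rational arithmetic (`Fraction`) avoids any floating-point rounding in these tiny probabilities.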
So does the probability depend on the initial number of balls and the number of rows?
Let us return to realizable experiments. For example, we have a Galton board with 10 rows of sharp edges. The passage of the balls is random, but as a result we get the typical Gaussian curve of the normal distribution of balls after they have passed through all the rows. See below.
We also know that sometimes – we don’t know when – one ball will keep falling to the right (we can record a video and see that it was, e.g., the 81st ball that kept falling to the right). The question is this: is it possible for the ball that keeps falling to the right to come at the beginning – in short, to be the first in the series? Calculate the probability of this event yourself; you know the number of balls and the number of rows.
And the next question – what if the experiment is spoiled? That is, the sharp points of the 10 rows of the Galton board are not fair, and neither are the balls. How do we evaluate the fairness of the conditions?
A Galton board with 10 rows: is it possible to have a situation where all the rightward falls come at the beginning? Or for the bottom boxes of the board to fill up in order, from right to left, according to the Gaussian curve of the normal distribution? It is possible, but the probability would be very, very low. Just do the analysis for a board with three rows, not to mention 10 or more.
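A simulated Galton board makes these claims checkable – a sketch, with peg decisions drawn from Python’s pseudo-random generator rather than from physical chaos:

```python
import random

def galton(n_balls, n_rows, seed=7):
    """Drop n_balls through n_rows of pegs; at each peg the ball goes
    left or right with probability 1/2.  Returns the count per bin,
    where the bin index is the number of rightward bounces."""
    rng = random.Random(seed)
    bins = [0] * (n_rows + 1)
    for _ in range(n_balls):
        bins[sum(rng.randrange(2) for _ in range(n_rows))] += 1
    return bins

print(galton(10_000, 10))  # peaked at the middle bin, tails near zero
```

The bin counts approximate the binomial distribution C(10, k)/2^10, which for many rows approaches the Gaussian curve mentioned above.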
A crucial concluding consideration on probability.
We can’t predict what will happen in the microstate (which ball will keep falling to the right), but we can predict the macrostate for a given number of balls versus the number of rows of the Galton board – that one ball out of 1,024 balls on a 10-row board will keep falling to the right. The nature of probability is the “violation” of regular oscillations (back and forth). In other words, how is it possible for more than one ball to fall on the same side again and again, in series of 2, 3, 4, …, N or N+1 balls?
To be continued next time.
How do we obtain such a beautiful course of the density function (see below) if we know that every event has the same probability?
Yes, every possible event is a non-repeatable original. The result is the constant line of the density function of realized events. But to obtain a really pretty density function (see above) we have to do something: ask ourselves about common characteristics of the passing balls or the tossed coins – e.g. the appearance of two heads.
a) Imagine a modified Galton board with 4 rows containing separating edges only. The function of such a board is only to separate the passing balls. See below.
As many separations as there are, so many boxes: there are 1 + 2 + 4 + 8 separations in the 4 rows, 15 edges in total, so there are 16 collection boxes below. The result is a flat line of the probability distribution.
If we look at the classical Galton board, we see that the passing balls are separated and then partly reconnected again – reconnected for the next division in the lower row.
The previous separation is cancelled and the separation of the balls starts again, regardless of the previous history of the separation of the passed balls. The result? The number of collection boxes is smaller than the number of collection boxes of the modified Galton board. In other words, there is a slippage between the collection boxes.
In the classical Galton board there are 4 rows with 10 edges. In the first row there is only 1 separation of the passing balls. In the second row there are 2 separations (−) and 1 reconnection (+) of the passing balls. In the third row there are 3 separations (−) and 2 reconnections (+), and in the final (fourth) row there are 4 separations (−) and 3 reconnections (+) of the passing balls. When we make a much larger number of rows, we get a resulting “curve” close to the ideal curve of the Gaussian normal probability density function – see below.
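The difference between the two boards can be counted exactly: the modified board keeps every left/right path separate (2^4 = 16 equally likely boxes, a flat distribution), while the classical board merges all paths ending at the same horizontal position into n + 1 = 5 boxes with binomial weights. A short sketch of that counting:

```python
from math import comb

n_rows = 4

# Modified board: every left/right path kept separate -> 2**n distinct
# boxes, one path each, so the distribution is flat.
paths = 2 ** n_rows                       # 16 boxes

# Classical board: paths ending at the same position are merged into
# n+1 boxes; box k collects comb(n, k) paths.
classical = [comb(n_rows, k) for k in range(n_rows + 1)]
print(paths, classical)  # 16 [1, 4, 6, 4, 1]
```

The total number of paths is the same in both cases (1 + 4 + 6 + 4 + 1 = 16); only the grouping changes – which is exactly the “reconnection” described above.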
Notice: the difference between the probability distribution and the density function is clear from the image above. The probability density function is continuous and smooth, whereas the probability distribution has a disconnected appearance – see the columns of values.
See the table below. It contains topics (considerations and suggestions) on the subject of this section. If you want, download the pdf files.
Probability_meaning
Ideal_versus_Real_Probability
The_probability_1
The_probability_2
The post Probability, randomness and necessity appeared first on ROMANVS Roman Mojzis.
A few phrases at the beginning
…in the beginning was chaos … from chaos arose order … or chaos is the basis for the principle of self-organization, which explains the present universe with its complex structures … etc.
A very interesting question – the existence of order in a chaotic environment. Order in chaos, the stabilization of forms and events in a chaotic environment – see the arrangement of crystals or the beauty of flowers.
How do we define chaos? For one thing, we need something to be chaotic: forms associated with events. Chaos is a process with random behaviour of distinguishable forms. These forms are indescribable – always different from one another. The forms differ from each other, even if only slightly, but they still differ – to infinity. From mathematics we know that between two points, however slightly apart, we can place infinitely many further distinct points.
What is the origin of chaos? Consider, e.g., the so-called self-organization theory. How does chaos self-organize?
Self-organization theory – the reciprocal movements of a complex system governed by the laws of non-equilibrium thermodynamics. According to this theory, a system can “spontaneously” organize itself if energy flows through it. In other words, if we have energy differences with a chaotic system between them, and we start the energy flow given by those differences, then the initially chaotic system will begin to organize itself into higher, orderly, predictive complex structures. This has been verified many times, not only in thermodynamics and physics but also in chemistry and biology – a theory proven in practice many times. But it adds more questions about the origins of life and the origins of the universe than it answers. First, the requirement of energy differences. Second, an impulse triggering the flow of energy. Furthermore, the regulation of the flow of energy and, above all, the origin of energy – or of what we call energy.
It is very unpleasant to expect chaos to extend to infinity. See the image below.
Chaos everywhere – it doesn’t matter whether the chaos of thermal or of vacuum fluctuations. There is a contradiction here. What is valid for a finite, however large, number of particles and boundary conditions may not be valid for an infinite number of particles – especially for indefinable fluctuations of the quantum field. There are already contradictions in chaos that do not cancel out when it is extended to an infinite expanse of chaotic behaviour – fluctuations, gas molecules, etc. Chaos and chaotic fluctuations cannot be static: they must move, or they must arise and disappear; there must be mutual collisions, mutual interaction. This is a question for further research, or for considerations based on detailed observations using the derived base units. Not to mention the necessary and unquestionable existence of at least two different chaotic environments – the basis of thermodynamics.
The question of what is beyond chaos is meaningless. It makes no sense in the view of units derived from regular appearances modulated on chaotic fluctuations – just like asking what is outside the universe. The universe and its internal parts, including the chosen units, behave as they do. There is no external observer, only internal observers with internal relationships. The question of the outside is not a matter of science but a matter of Faith – and that is another topic.
What is the meaning of chaos? How can chaos exist? Where did chaos come from?
Chaos can't exist by itself, by its inner power, by its nature. It can't hold together by its own properties. Chaos cannot sustain itself in chaotic behaviour. Chaos must be sustained, must be allowed to behave chaotically: first, to be in motion, and second, to be limited from the outside or the inside. Without permitted motion and limiting conditions, the chaotic particles (waves) would either collapse into themselves or drift apart indefinitely. Very roughly – see the particles of a gas. These particles must be in motion and must be confined in a closed space or limited by a gravitational field; otherwise the gas molecules would either collapse into each other or move away from each other.
A gas, defined as a collection of particles (waves), exists as such only in a closed space or under gravitational forces. Without gravitational forces or a closed space there is no gas as we understand it – colliding particles. Stretch the space infinitely and there are no collisions, no gas, no random fluctuations, no chaotic behaviour. Chaos without collisions is not chaos. Collisions are given by external constraints – gravity, or a closed space such as a vessel or anything else. We quietly postulate this, but it is important to think about it in more detail.
Without a closed space or gravitational forces there is not only no gas, but also no chance to condense into a liquid or solidify into a solid. See phase transitions.
Chaotic behaviour needs bounds. Chaos needs boundary conditions. Without bounds, without boundary conditions, there is no chaos, no chaotic behaviour.
Two conditions define the chaotic behaviour of a gas:
1) the limited space
2) the motion of its parts.
The gas needs limits – without limits the gas could not exist. There must be limits for interaction – the reciprocal collisions among the particles of the gas.
It is very difficult, if not impossible, to apply partial knowledge of the classical behaviour of gases to an unlimited number of particles, waves or various fluctuations.
Let's start with the origin of a chaotic environment – e.g. balls in a closed box. Let's start with one ball in a closed box. See below.
In the beginning there is the ball with an initial velocity. The ball bounces off the wall of the box, but the bounce has an angular tolerance – see the grey filled area. We can predict a maximum number N of bounces based on a mathematical model. And that is the point: the range of tolerance increases with every bounce, and after a while the angular tolerance reaches 360 degrees. There the mathematical model ends; further calculations would be useless. The range of angular tolerance is influenced by the shape and texture of the ball, the course of its stiffness, the properties of the box wall, etc.
The more balls inside the closed box, the shorter the transition time to chaotic behaviour. But it also depends on the diameter (size) of the balls relative to the size of the enclosing box: the smaller the balls relative to the box, the longer the transition to chaotic behaviour takes. How do we determine the transition time? There are many input variables: the size or volume of the box, the size or diameter of the balls, the number of balls, the regularity of the balls and of the box, the regularity of their mechanical properties, the temperature of the balls and of the box, surface roughness, etc. Is there any sense in analyzing all these inputs? The fundamental parameters are the ratio of the size of the balls to the size of the box and their relative velocity, together with the experimentally measured angular tolerance when a ball bounces off the wall or off another ball.
The result is the time of transition of the balls into chaotic behaviour. The transition time is also determined by the quality of the balls and the box – their regular shapes and mechanical properties, together with the quality of the material from which they are made, and also by their temperature. And also by our measurement – or rather its accuracy. Even if we are as accurate as possible in our measurements, we will not get any further. The higher the temperature, the higher the fluctuations.
There will always be a transition to chaos. Even in the case of so-called ideal conditions – ideal spheres, ideal materials, ideal surfaces, ideal zero or equilibrium temperature together with ideal fluctuations – there will be a transition (after an extremely long time) to chaotic behaviour, because of the maximum possible mathematical accuracy – the finite number of decimal places.
A temporary conclusion: does it make sense to make short-term predictions? It does – it is necessary to know what will happen in the near future and to verify the quality of the mathematical models. Does it make sense to make long-term predictions of the balls' behaviour? It doesn't, because in the end it all dissolves into purely chaotic behaviour.
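The growth of the angular tolerance can be put into a toy model – assuming, purely for illustration, that the tolerance is multiplied by a constant factor at each bounce (the factor and the initial values here are invented, not measured):

```python
def predictable_bounces(initial_tol_deg, growth_factor):
    """Toy model: the angular tolerance is multiplied by growth_factor
    at every bounce; prediction ends once it reaches 360 degrees."""
    tol, n = initial_tol_deg, 0
    while tol < 360.0:
        tol *= growth_factor
        n += 1
    return n

# Even a tiny initial tolerance is amplified quickly:
print(predictable_bounces(1e-6, 2.0))  # 29 bounces
print(predictable_bounces(1e-9, 2.0))  # 39 bounces
```

Note the asymmetry: improving the initial accuracy a thousandfold buys only about ten extra predictable bounces – which is why long-term prediction is hopeless while short-term prediction remains useful.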
Replace the closed box with two horizontally spaced walls. The ball will bounce repeatedly between them. See below.
The measured position of the ball is in the middle. The direction to the right shows the future and the direction to the left shows the past. It is clear from the image that the uncertainty of the ball's position increases both to the right (future) and to the left (past). The degree of uncertainty depends on many conditions – the shape and quality of the ball and the walls, their mechanical properties, etc.
There are three possibilities:
1) in the past – everything is hidden in chaotic behaviour
2) in the future – everything is hidden in chaotic behaviour also
3) the actual state, with some uncertainty in the future or into the past
... to be continued
The situation with the chaotic behaviour, not only of a gas but also of quantum fluctuations, is similar to the probability calculus. Particles with no external limits would expand to infinity; thus there is no chance of even two particles meeting. Likewise in probability – there must be limits (the edges of the dice, the sides of the coin, the given possibilities, etc.). If there were no limits, we could not evaluate probability. It is hard to calculate the probability of one hard-to-differentiate event out of an infinity of hard-to-differentiate possibilities. However, we know that when we look more closely we keep discovering new details, and likewise when we go the other way, to great distances – still new and different structures.
How do we cancel chaotic behaviour? How do we cancel the properties of a gas? Very easily. Put the gas into a closed box, then fly to intergalactic space. Open the box and all chaotic behaviour will be over: the gas will not exist from that moment. But didn't each particle of the gas have some motion? The motion didn't disappear. The particles are free to move through space, each in the direction it had on leaving the box. At some distance from the box the particles will occasionally collide, but at multiples of that distance each particle will fly off on its own, never colliding with another particle – until the particles reach the nearest galaxy with its stars and planets.
On the other hand, we can also cancel the chaotic behaviour by reducing the particle speed to zero – then the particles would collapse into themselves.
Go on! Imagine only the particles in a closed box in empty space (without galaxies) – material particles in chaotic gas motion. Every particle has its own mass; every particle weighs something. We open the box – and what happens? The particles are free to move through space, each in the direction it had on leaving the box. But we are in empty space without galaxies. In other words, the particles fly into free space, and if their velocity is less than the escape velocity they eventually fall back and collide again at the starting point. But the gas will no longer exist: the particles will lie motionless together, at most oscillating due to internal atomic motions.
If their velocity is greater than the escape velocity, the particles move away to “infinity”. But we already know from inertia and spacetime that even the slightest matter deforms spacetime. In short, the free particles will travel for a long time through gently deformed spacetime until they come together again at another place – after many orbits along the curves of the deformed spacetime.
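The escape-velocity threshold invoked here can be made concrete. A minimal sketch in Python, assuming Newtonian gravity (v = √(2GM/r) is the standard formula; the "box of gas" values are purely illustrative):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """Minimum speed to escape to infinity from distance radius_m of mass mass_kg."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# Sanity check against a familiar body: the Earth.
v_earth = escape_velocity(5.972e24, 6.371e6)  # ~11.2 km/s

# Hypothetical "box of gas" system (1000 kg, measured 1 m out): particles
# slower than this fall back and re-collect at the starting point; faster
# ones drift off through gently deformed spacetime.
v_box = escape_velocity(1000.0, 1.0)
```

For a self-gravitating kilogram-scale cloud the threshold is a fraction of a millimetre per second, which is why almost any thermal motion disperses the opened box.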
Imagine our universe filled with moving particles. Every particle has its own mass. These particles interact with each other through collisions. They are attracted to each other, and they can never escape each other completely. The range of their appearance defines the dimension of the universe. It need not be particles only – perhaps the basic vacuum fluctuations of the quantum field.
Let’s try the following suggestion: in the beginning of the universe there was no chaos. In the beginning there were the primordial origins of the boundary conditions. The chaotic behaviour of everything inside those boundary conditions is their result. Not to mention the wildly “bubbling” quantum fluctuations.
It is impossible for us humans to explain the primordial origin of the boundary conditions (gravity, closed space) using thermodynamic equations that were themselves derived to describe the behaviour of a bounded chaotic gas – behaviour that depends on those very boundary conditions.
If chaos cannot support itself, then chaos is unable to form itself into higher structures – least of all an organized organism. A mere flow of energy through chaos does not solve anything; see self-organization theory. We need chaos – or rather at least two different chaotic environments with a difference in energy levels – before any germs of organized structures can appear.
Going back to thermodynamics: the well-known thermodynamic equation for the chaotic behaviour of an ideal gas at a given pressure, temperature and volume, pV = nRT, needs at least two different chaotic closed spaces containing the gas. It does not matter whether the closed space is the universe, the Earth's atmosphere, a closed vessel, or a cylinder with a piston inside it.
For thermodynamics we need at least two different chaotic environments, one of which is closed – see a closed volume of gas within the atmosphere. Two environments differing in the intensity of their chaos. Only then does thermodynamics begin with its equations. As in physics or mathematics generally, we must have at least two different dimensions or values – distance, volume, area, force, speed, time, intensity, voltage, etc. Then we can compare, measure and solve; then we can generalize, predict and verify by measurement.
To have energy we must have at least two or more wavelengths. With only one wavelength there is no ratio – only a single value, equal to 1 (or anything else we wish). The value of an energy is given by the ratio of the measured energy to the base unit of energy.
Conservation of energy (matter) is valid only for an isolated space, as in the second law of thermodynamics. The sum of the total energy before an experiment in an isolated space equals the sum of the total energy after the experiment.
What about a law of conservation of energy – or of conservation of shapes and structures in general, like pots or artworks? A law of conservation of information? The question is the category of the space: open, closed or isolated. And the category of space in the universe could itself change.
Not only the law of conservation of energy, but also conservation of frequency, of shapes, of space, of volume, or of artworks – which are sometimes broken? Where is the point of view? And what about frequency – what does frequency mean? Regular changes, or nearly regular changes? See a probabilistic distribution with a peak in the middle. How long must a frequency last, or over how many periods? Where is the criterion for judging what counts as a frequency, how many parts it has, and so on? And what about a part of a frequency – like the part of a pot being created by the potter? Such laws are valid only for an isolated space. And what does an isolated space mean? Is such a space even possible in the universe?
Where did the information come from? Where did all the “regular” changes come from in the indescribably random circumstances of the quantum foam – new songs, artworks, inventions, new ideas, etc.? A law of conservation of new songs, artworks, inventions, new ideas – of information? See entropy – information, shapes and structures.
See thermodynamics
There are a large number of equations of state for real gases. These equations become more complicated the higher the accuracy required and the wider the range of pressures and temperatures we want to describe and the closer the real gas state is to the critical point.
In the beginning we have chaos – the gases expand as they want, one more, one less, each time differently – at first sight we have no chance to describe the behaviour of the gases.
A few scientists (Boyle, Mariotte, Gay-Lussac, Pascal, Torricelli, …) started to take a closer look at the behaviour of gases. They enclosed the gases in containers with a piston, measured (using their chosen units) what changed (temperature, pressure, volume), isolated the gases from their surroundings – and behold, very simple elementary formulas came out.
pV = nRT, where p is the pressure, V the volume, n the amount of substance, T the temperature and R the gas constant.
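As a quick numerical illustration (my own sketch, not part of the original text), the ideal-gas law reproduces standard atmospheric pressure for one mole at 0 °C in the molar volume:

```python
R = 8.314  # universal gas constant, J/(mol*K)

def ideal_gas_pressure(n_mol, T_kelvin, V_m3):
    """p = n*R*T / V for an ideal gas."""
    return n_mol * R * T_kelvin / V_m3

# One mole at 273.15 K in the molar volume 0.022414 m^3:
p = ideal_gas_pressure(1.0, 273.15, 0.022414)  # roughly 101 kPa
```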
Later, when we go into detail, we find that the calculated values differ from measurement. Let us refine our equations with more measurements. We get more complex formulas – not so simple any more.
The van der Waals equation: (p + a·n²/V²)(V − nb) = nRT, with the substance-specific constants a and b.
And if we go into even more detail, we have a choice: very complex formulas with a given accuracy for a given region of physical conditions – for example the BWR equation (Benedict, Webb and Rubin).
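A hedged sketch of how the van der Waals correction shifts the ideal-law prediction. The constants a and b below are approximate literature values for CO2, used only for illustration:

```python
R = 8.314  # universal gas constant, J/(mol*K)

def vdw_pressure(n, T, V, a, b):
    """van der Waals equation solved for pressure:
    p = n*R*T / (V - n*b) - a * n**2 / V**2"""
    return n * R * T / (V - n * b) - a * n**2 / V**2

# CO2 (approximate tabulated values): a ~ 0.364 J*m^3/mol^2, b ~ 4.27e-5 m^3/mol
p_real = vdw_pressure(1.0, 273.15, 0.022414, 0.364, 4.27e-5)
p_ideal = 1.0 * R * 273.15 / 0.022414
# The attraction term (a) lowers the predicted pressure below the ideal value.
```

The correction is small for a dilute gas but grows rapidly near the critical point, which is exactly where the simple equations break down.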
In the beginning there is the chaos of gas behaviour. First comes a pretty simple and beautiful equation for ideal gases. In the middle there is a rather complicated but still beautiful equation. In the end there is a chaos of very complicated equations, each valid only for a very narrow range of pressures and temperatures. Chaos again – but now in descriptiveness. See the curve below.
(figure: descriptiveness curve – from chaos, through simple and then complex equations, to a chaos of over-describing equations)
In summary: the more general an equation describing a physical process is, the more beautiful it is; and vice versa, the more concrete the equation, the more its beauty decreases. Beautiful equations have a wide range of validity with poor accuracy; less beautiful equations have a very limited range of validity with high accuracy. The simple thermodynamic equation of state is barely applicable in practice – or rather, it is applicable, but only as a very inaccurate view, and even then only for light gases such as hydrogen and helium. With the van der Waals equation the situation is better. Beyond that we use specialized equations with a defined interval of validity. An example from practice: the calculation of a circulating hot-water network (never mind a circulating steam network) – even if we use the best equations, as accurately as possible, with water properties specified for the given conditions, we still get only framework results. In practice the hot-water network has to be finely regulated.
In other words, without measurement and control, technical processes are not possible in practice. With equations alone we can set a given range, but without external regulatory interventions the functioning of hot-water systems, steam power plants, rockets – and all technical infrastructure in general – is impossible. In the same way it is impossible to accurately predict the behaviour of billiard balls after n collisions, not to mention the legendary three-body problem formulated by H. Poincaré.
This is what makes engineering interesting. The situation is different every time, even if the calculations show the same results; you still need to respond differently. There is always something to discover, just as in science. There are always new adventures of experience, all the way to infinity.
Summary: thermodynamic processes served in their time as the basis for the derivation not only of entropy but of all considerations of energy and, in fact, of the behaviour of the whole universe. With the same basis, it is hard to believe that the thermodynamics of a collapsing star, of processes in dust nebulae, of star-gas systems, etc., will proceed ideally according to framework calculations – in contrast to the difficult (mostly iterative) calculations of the motion of three bodies, of processes in steam systems, or of the production of artificial diamonds. It is hardly possible to predict the future evolution of the universe on the basis of idealized quantum-mechanical equations together with the equations of general relativity – or, conversely, to accurately reconstruct the history of the universe up to now.
A very interesting thermodynamic process – the isothermal expansion of an ideal gas
At the beginning of such a process there is an ideal gas with its characteristics: temperature T, pressure P, volume V, energy U, number of particles N, density D, entropy S. We will examine this process in detail in terms of the base units – the definitions of distance (volume), pressure and temperature. At the beginning of the isothermal expansion there is an initial state with the values T1, P1, V1, U1, N1, D1, S1;
at the end of the isothermal expansion there is a final state with the values T2, P2, V2, U2, N2, D2, S2.
T1 = T2, P1 > P2, V1 < V2, U1 = U2, N1 = N2, D1 > D2, S1 < S2. The temperature is constant, as are the internal energy and, of course, the number of particles. After the expansion the pressure has decreased, as has the density – the two are connected. On the contrary, the volume has increased along with the entropy – again connected. The internal energy of the expanding gas is constant: throughout the isothermal process, the reduction of internal energy is compensated by the supply of thermal energy from outside. This means that 100 % of the supplied external thermal energy is converted into mechanical work. The chaotic motion of the ideal-gas molecules is converted into directed movement of the piston in a given direction, with an efficiency of 100 %. At first sight this looks amazing, but we need to go further. Note the pressure reduction together with the volume increase – that is, a change in the ordering of the possible states of the gas molecules: the entropy. Adiabatic expansion also converts the internal energy of the chaotic ideal gas into useful work of directed piston motion, but not with 100 % efficiency: for 100 % efficiency the internal energy of the gas would have to drop to zero – in short, the gas temperature would have to be 0 kelvin, which is impossible.
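The bookkeeping of the isothermal expansion above can be sketched numerically. For an ideal gas dU = 0 on an isotherm, so the heat supplied equals the work done, W = nRT·ln(V2/V1), and the entropy grows by ΔS = nR·ln(V2/V1) (the mole count, temperature and volumes below are illustrative):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def isothermal_work(n, T, V1, V2):
    """Work done by an ideal gas expanding isothermally from V1 to V2."""
    return n * R * T * math.log(V2 / V1)

def isothermal_entropy_change(n, V1, V2):
    """dU = 0 on an isotherm, so Q = W and dS = Q/T = n*R*ln(V2/V1)."""
    return n * R * math.log(V2 / V1)

# Doubling the volume of 1 mol at 300 K:
W = isothermal_work(1.0, 300.0, 0.01, 0.02)       # ~1729 J, all from supplied heat
dS = isothermal_entropy_change(1.0, 0.01, 0.02)   # ~5.76 J/K, entropy increases
```

Note that W = T·ΔS exactly, which is the "100 % conversion" of supplied heat into work that the text describes.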
It is worth recalling and discussing the experiment with the compression or expansion of electromagnetic radiation enclosed in a box (of mirrors, for example) – the so-called Einstein box experiment. When the box is compressed, the wavelength of the light is reduced, so the frequency f increases, which means an increase of its energy E = h·f, where h is Planck's constant. The increase in the energy of the enclosed radiation comes at the expense of the work supplied from outside in compressing the box. See below an initial state with an enclosed wavelength of initial energy E1 = h·f1;
after expansion, the wavelength in the closed box will be longer – a longer wavelength with a lower frequency, which means a lower energy E2 = h·f2 – see below.
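The E = h·f relation in the box experiment is easy to check numerically; a small sketch (the 500 nm starting wavelength is just an illustrative choice):

```python
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light in vacuum, m/s

def photon_energy(wavelength_m):
    """E = h*f = h*c / wavelength for the enclosed radiation."""
    return h * c / wavelength_m

E1 = photon_energy(500e-9)    # initial state, ~4e-19 J
E2 = photon_energy(1000e-9)   # after the box doubles in length
# Stretching the wavelength to double halves the frequency and the energy.
```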
This makes it very easy to demonstrate isothermal and adiabatic expansion of radiation. In adiabatic radiation expansion, the situation is simple – the wavelength becomes longer, the frequency and thus the energy becomes lower. The longer the wavelength expansion, the greater the efficiency. 100% efficiency is unreachable – the wavelength would have to be extended to infinity, i.e. the frequency would be reduced to zero.
But the situation is different in the case of isothermal expansion. Here the wavelength, and therefore the energy, remains constant. The enclosed space becomes longer, yet the wavelength stays the same – which is only possible if the decrease in frequency (the wavelength stretching) is compensated from outside, so that the frequency, and hence the wavelength, remains constant. But that sounds like a contradiction in terms. For the wavelength to remain the same under a given extension, the piston of the enclosed space must move by a distance equal to the wavelength. Then there will be two wavelengths in the enclosed space – or three, or four, … or x, depending on how many wavelength periods there were to begin with. See below – the initial state for the expansion.
In the initial state there are four periods of radiation oscillation at frequency f1. There are two possible courses of the expansion: adiabatic or isothermal.
Firstly the adiabatic expansion. The final state for the adiabatic expansion – see below
After the adiabatic expansion there are again four periods, but with a longer wavelength and a lower frequency f2. The difference between the energy E1 before and E2 after the expansion equals the work A done by the piston: A = E1 − E2 = hf1 − hf2 = h(f1 − f2). Part of the radiation energy was used for the work A done by the piston. The longer the path of the expanding piston under the pressure of the radiation, the lower the energy (frequency) of the radiation enclosed in the cylinder. That sounds logical. But let us go on to the isothermal expansion of radiation.
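The energy balance A = h(f1 − f2) can be spelled out; a sketch with illustrative frequencies in the visible-light range:

```python
h = 6.62607015e-34  # Planck constant, J*s

def piston_work_adiabatic(f1, f2):
    """Work extracted when enclosed radiation drops from frequency f1 to f2:
    A = E1 - E2 = h*f1 - h*f2 = h*(f1 - f2)."""
    return h * (f1 - f2)

A = piston_work_adiabatic(6.0e14, 3.0e14)  # halving the frequency
# Exactly the energy the radiation lost went into the piston's work:
E_lost = h * 6.0e14 - h * 3.0e14
```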
The final state for the isothermal expansion – see below
We see that the number of periods has grown from four to six – added somewhere from outside. Thus the enclosed frequency remains constant. But what about the work done by the piston? The radiation energy in the cylinder is constant; there is no reduction. The energy is still equal to E = hf. What work A did the piston do while moving a distance of two periods, if the energy E of the radiation is constant?
By the way, imagine a moving ball in the same closed space – the cylinder with the piston inside it. The ball has kinetic energy Ek. In an adiabatic expansion, the kinetic energy Ek of the ball is reduced at the cost of the work A done by the piston. In an isothermal expansion, the kinetic energy Ek of the ball remains constant, because it is kept at the same level by an energy supply Es from outside, with Es = A. There is a decrease in the ball's kinetic energy, but it is restored to its original amount from outside, so the velocity of the ball is the same after the isothermal expansion. It is the same with the isothermal expansion of radiation: as the radiation expands, its energy decreases in proportion to the piston displacement – that is the work done by the piston. But the frequency of the enclosed radiation is returned from outside to its original, pre-expansion level.
Going back to the isothermal expansion of enclosed radiation: it is interesting that the change of the piston's position in an isothermal expansion takes place in steps equal to the wavelength of the enclosed radiation – or in distances equal to multiples of that wavelength. The piston has allowed states, allowed positions, allowed distances. An intermediate position would not be an isothermal expansion. Compare the allowed positions of electron orbitals.
The enclosed radiation is prolonged during the isothermal expansion. The frequency is constant during this stepwise, quantum expansion, but there are more wavelengths afterwards – which leads us to another interesting question about expansion, and about changes in spacetime in general.
Let us imagine the following thought experiment: not adiabatic expansion, but adiabatic compression of radiation in a closed box. To adiabatically compress the radiation from outside we need to supply work dA. The energy of the radiation increases by dE = h·df = dA. So far so good: the change in frequency, and therefore in the energy of the radiation, equals the work dA supplied. But suppose we want to compress the radiation to a wavelength close to zero – the frequency, and hence the energy, would grow towards infinity. How could that be done? Where in the world, in the universe, could we find a force so powerful? It is most likely to be found in a collapsing, very massive star, where the thermonuclear reaction stops and there is nothing left to balance the gravitational force. Since this is a thought experiment, let there be an empty space inside the collapsar filled with enclosed radiation. See below.
A collapsing massive star passes into the neutron-star stage, and the enclosed radiation must be compressed with enormous force. But what is the strength of the gravitational field inside the star – maximal or minimal? The intensity at the centre of such a star is exactly zero. The radiation inside such a massive star, a neutron star, is gravitationally isolated from the neutron star itself. See below the intensity K and the potential U of the gravitational field inside and near a very massive ball.
Why do I write down these articles, these reflections? Because the above profiles of gravitational intensity and potential are derived from idealized models in which we assume a continuous gravitational force – infinitesimal calculus, differential equations. But gravity cannot be continuous: it has its bearers. Those bearers are grouped atoms or, at close distances, particles (neutrons) – grouped excitations of the quantum field. But what do we know about such tightly packed excitations? Are they even possible, seen from our perspective of sparse densities compared with the density of a neutron star? How does the quantum field behave in such a massive star? This massive star, even as it collapses, is itself a result of the quantum field. Does that mean the quantum field is collapsing as well? Would the effect then be stronger than the cause? We do not know the origin of the quantum excitations, the so-called wave-particles.
But go on! When the collapsing star begins to approach the black star stage (called a black hole) the energy of the compressed radiation will be greater than the energy of the star itself, and on further compression the enclosed radiation will have such a short wavelength – almost infinitesimal – that its energy will be greater than the energy of our entire universe. See below
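The claimed divergence is just E = h·c/λ → ∞ as λ → 0; a sketch showing how fast the energy of a single enclosed quantum grows under compression (the wavelength values are illustrative):

```python
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s

def quantum_energy(wavelength_m):
    """Energy of one quantum of enclosed radiation: E = h*c / wavelength."""
    return h * c / wavelength_m

E_nm = quantum_energy(1e-9)          # X-ray scale: ~2e-16 J
E_planck = quantum_energy(1.6e-35)   # near the Planck length: ~1e10 J per quantum
# Every further tenfold compression multiplies the energy tenfold, without bound.
```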
All we need to do is reconcile the above equations mathematically – not to mention the energy of the collapsing star itself. By the way, time slows down dramatically in the immediate surroundings of a neutron star. So, if we respect general relativity, a star cannot collapse into a black hole within the time of our universe: from our point of view, the collapse “freezes”.
Gravity needs space. Just like every field excitation, just like every property needs space. Without space, there is neither matter nor time. Just as time does not exist outside of matter and space.
Thus it is most likely that a collapsing star will return to its initial state – the basic fluctuating excitations of the omnipresent quantum field. But verification is beyond our limits.
… to be continued
See below – brief remarks awaiting incorporation into articles:
Attachments: Spring_collisions | Thermodynamics | Energy_conversion | gravity_contemplation | Origin energy_I | Origin_waves
The post Transient states in Physics appeared first on ROMANVS Roman Mojzis.
At the beginning of the Universe, there were no elements as we know them today. It took some time for the first atoms of elements such as hydrogen and helium to form, and even longer for the other elements of the periodic table to be created inside stars. After that, the elements joined together according to bonding laws to form molecules in chemical processes on which all further development is based. The occurrence of various compounds provides clues about natural and human history. There was certainly no artificial bronze in the Tertiary, but there has been since the Bronze Age. For much of human history there was no steel, but now it is ubiquitous, as are transistors and integrated circuits, which have existed only for the last few decades. One day they are likely to be unearthed during archaeological excavations by our descendants, or perhaps by other life forms altogether. The same applies to musical compositions and works of visual art, the oldest of which are only thousands or tens of thousands of years old. We can only learn about the world through our own experience, which is sometimes painful but often brings fantastic results. The basic state of the Universe, of which our world is a small part, is quantum foam – pure randomness in every detail, completely unpredictable in any way. So how is it possible that it contains stable structures such as particles, molecules and organisms, including human beings? Inevitably, such structures get eroded away by the surrounding chaos. Consider the fate of an abandoned ship in a storm in the middle of the ocean: it is only a matter of time before it sinks, or runs ashore and becomes a wreck. Similarly, a vacant house without heating and repairs will become derelict after a few years and will gradually be destroyed by rain, wind, frost and heat. And the same principle applies even to the human body.
Consider the regeneration of liver cells and the other healing and maintenance biological processes without which we couldn't live for very long at all. To conclude: the existence of organized structures, like particles or organisms, is only temporary. For them to exist for long periods, some external force must maintain their organization by counteracting entropy.
An interesting feature of the evolution of the universe: out of violence and roughness come such fine structures – plants, organisms.
By violence and roughness I mean the primordial states of the universe and sharp oscillations in the form of matter. By fine structures I mean the most gentle organic structures: the human brain, plant blossoms, flowers, the powder on a butterfly's wings, and so on.
What is time, what is life? It is not possible to separate the origin of life on Earth from that of the Universe, nor to distinguish the evolution of life on Earth from the evolution of matter in the Universe towards the conditions for the “formation” of life. In a sense, life manifests itself even in the beauty of minerals or in structures of glass: in the sky we can see the similarity of certain cloud structures to graptolite slates, or cracked glass with grooves like a beetle's, and much more.
Science is about limits, about bounded shapes, about different shapes and structures with common characteristics – in short, about discreteness. Without differentiable shapes there is no science.
Scientists have argued about the differences among species – where to put their boundaries. It might be better to ask what discreteness in species means. What is the meaning of the fact that continuous Nature is bound into discrete appearances, like species in biology? Biological species do not change continuously from one into another: an apple does not change continuously into a pear. See the figure below.
(figure: pear and apple)
Formulated graphically, we see two curves of the frequency distribution of common characteristics of apple and pear: apple P(M), pear P(N). See the next figure below.
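The two distribution curves P(M) and P(N) can be sketched as Gaussians over some measured characteristic. All the parameters below – the means, the spread, and the "shape index" itself – are hypothetical, chosen only to show the gap between the species:

```python
import math

def gaussian(x, mean, sigma):
    """Frequency distribution of one measured characteristic."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical shape index: apples cluster near 1.0, pears near 2.0.
def apple_P(x):
    return gaussian(x, 1.0, 0.2)

def pear_P(x):
    return gaussian(x, 2.0, 0.2)

# Midway between the peaks both densities are tiny: no continuum of
# half-apple-half-pear fruits, just two well-separated distributions.
midpoint_apple = apple_P(1.5)
peak_apple = apple_P(1.0)
```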
It is clear that biological species are the result of quantum behaviour at the micro level.
See the figure below – the “sharp” line spectrum of radiation emitted by heated atoms of various elements such as hydrogen, sodium, potassium, iron, etc. Each “sharp” line has a probabilistic distribution of wavelengths – marked by the Greek letter lambda.
To be sure, all chemical elements are a result of the Pauli exclusion principle. Naturally, all substances formed from chemical elements, whether living or non-living, are also a result of the exclusion principle. This behaviour is inflated to large dimensions in Nature: not only biological species, but also species of minerals, liquids, solids and gases. Water does not continuously change into oil, or into hydrogen peroxide.
Water is clearly distinguishable from hydrogen peroxide, let alone oil. Just as an apple is clearly distinguishable from a pear. If not at first glance, on a closer analysis we can see a clear difference.
Without quantization, without limitation, without quantum behaviour, no recognition is possible – no recognition of distinct marks. The basis of set theory is its elements, and these elements must be differentiable, very well differentiable. If there is no differentiability, there is no set, no mathematics, and no science either.
See the next figure below – resolving power between two peaks
These two peaks must have a certain minimum distance from each other. Otherwise they cannot be distinguished. In optics we know this as resolving power.
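The minimum-distance condition can be sketched with two equal Gaussian peaks: they count as resolved when the combined curve dips between the summits. This is a toy criterion of my own, a simplification of the optical resolving-power idea:

```python
import math

def two_peaks(x, x1, x2, sigma):
    """Sum of two equal Gaussian peaks centred at x1 and x2."""
    g = lambda m: math.exp(-((x - m) ** 2) / (2 * sigma ** 2))
    return g(x1) + g(x2)

def resolved(x1, x2, sigma):
    """Peaks are distinguishable if the midpoint dips below the summits."""
    mid = two_peaks((x1 + x2) / 2, x1, x2, sigma)
    summit = two_peaks(x1, x1, x2, sigma)
    return mid < summit

well_separated = resolved(0.0, 4.0, 1.0)   # True: a clear dip between the peaks
too_close = resolved(0.0, 0.5, 1.0)        # False: the peaks merge into one bump
```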
Without resolution there is no measurement, without resolution there are no laws whether mechanical, thermodynamic or natural at all.
The law of conservation of energy is valid only for an isolated space, as in the second law of thermodynamics: the sum of the total energy before an experiment in an isolated space equals the sum of the total energy after it. But where in the Universe is there an isolated system – a completely isolated system without any interactions with its surroundings? Such an ideal isolated system does not really exist in the world, any more than the ideal gas, or the ideal point, line or cube, or anything else. Not to mention the differences among isolated, closed and open systems – and the boundary conditions may change, e.g. an isolated system turning into a closed system, where there is an exchange of energy but not of matter.
It is impossible to make an ideally closed space in Nature – especially when we do not know the origin of the quantum foam, or where matter came from. Consider the ideal coordinate system: we imagine a rectangular grid like squared paper, all squares the same. But space without matter makes no sense, and matter is responsible for curving space – including the coordinate system. Empty space without visible matter is full of fluctuating elements (particles and antiparticles): a grainy structure, like an untuned TV screen.
It is very interesting how a few germ cells can develop into a structure as complicated as an organism, whether plant, animal or human. Yes, the DNA structure is clear, but development also depends on the influence of the environment. The same holds for the universe: from its first few forms, the universe has developed into very highly complex structures, including organisms. And what about the environment of our Universe?
Self-organization theory: the reciprocal movements of a complex system governed by the laws of non-equilibrium thermodynamics. According to this theory, a system can “spontaneously” organize itself if energy flows through it. In other words, if we have energy differences with a chaotic system between them, and we start the energy flow given by those differences, then the initially chaotic system will start to organize itself into higher, ordered, predictable complex structures. This has been verified many times, not only in thermodynamics and physics but also in chemistry and biology.
The theory has been proven in practice many times. But it raises more questions about the origins of life and of the universe than it answers. First, there is the requirement of energy differences; second, an impulse triggering the flow of energy; further, the regulation of that flow; and above all the origin of energy – of whatever it is we call energy. We do not get away with the explanation that energy is the ability to do work. Very reduced: energy is given by the frequency of oscillation of electromagnetic waves, E = f, if we set Planck's constant equal to one. The higher the frequency, the higher the energy.
Thus, the energy flow occurs between two or more different frequencies. Roughly, we can imagine two compressed springs that differ in rigidity.
It is not enough to have just two or more different frequencies; we must also have very many dispersed particles between them – entities that somehow interact with the flow of energy. And here comes the question of the size, number and proportion of those entities relative to the size of the original energy difference, that is, of the frequencies themselves. Further, there must be a carrier of the frequencies. We no longer have the ether, but a quantum field full of vacuum fluctuations – pure chaos. Then there is the question of the two different frequencies themselves: their origin in an environment like a chaotic quantum field.
Then there is the existence of laws that define the behaviour of different frequencies. Why does a higher frequency overwhelm a lower one and not the other way around? But it also depends on rigidity. An example from mechanics: however strongly compressed, a spring of low rigidity does not overwhelm a barely compressed spring of high rigidity. No matter how wound up, a clockspring does not overcome the uncompressed spring of a car's suspension. So much for distributing the properties of a quantum field – it is impossible to describe a chaotic quantum field.
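The spring comparison is just Hooke's law, F = k·x: the force depends on both rigidity and compression. A sketch with hypothetical stiffness values (not taken from any real clock or car):

```python
def spring_force(k_n_per_m, x_m):
    """Restoring force of a compressed spring (Hooke's law): F = k * x."""
    return k_n_per_m * x_m

# Hypothetical values: a soft clockspring wound far, vs. a stiff car
# suspension spring compressed almost not at all.
F_clock = spring_force(10.0, 0.05)           # soft spring, large compression: 0.5 N
F_suspension = spring_force(50000.0, 0.001)  # stiff spring, tiny compression: 50 N
# The barely compressed stiff spring still overwhelms the fully wound soft one.
```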
In the end, different frequencies can be modelled in clay, and nothing happens – the clay must be living. Only when we start moving the clay will something happen; but if I do not want chaotic behaviour, I have to issue regulations, set limits. And where do the limits and regulations for so-called self-organization come from? Regardless, in an environment like the chaos of quantum fluctuations nothing can stay stable – compare the dissipative environment of a corrosive acid acting on a piece of cloth. In other words, in the chaos of quantum vacuum fluctuations, everything must be recovered again and again through the creation and annihilation of particles, and then the laws and rules among particles, antiparticles and waves must hold – let us call them electromagnetic waves. And then consider classical particles as so-called frozen energy, or better, as excitations of the quantum field.
The laws of nature as we know them were not around just after the creation of the universe – or at the moment of creation. The question is what that moment is and how long it lasted. See the theory of no beginning of the universe, but of the random expansion of one fluctuation of the quantum field into its present cosmic form. So the laws of nature – not only physical and chemical (especially organic chemistry) but biological and sociological – were, so to speak, condensed in an initial state of the universe, of which we do not know how long it lasted. We have no scale: the atomic caesium clock as we know it today did not exist, but its idea was condensed into primordial forms, as were its creators and users. There is no telling what is hidden in the quantum field. So let us keep exploring, so that we have something to look forward to.
Actually, the whole world as it is – including all biological species and the beauties of nature throughout history, including all works of art and music – all of that was condensed in the primordial beginning? The origin of the jet loom, of every invention, musical or artistic work, idea, technological process, etc. – all condensed in the early days of the universe? Or is it otherwise? I had forgotten the idea that our universe had about the mass of a bag of potatoes at the beginning, and that the rest was built up during the expansion – the law of conservation of energy holds, with the negative energy of gravity balancing the present enormous mass (energy) of the universe like a pair of scales. My remark is not mocking or diminishing the degree of knowledge reached: one has to formulate an idea, law or equation based on the available facts and then see how it agrees with further observations. We formulate new facts at the risk of making incorrect conclusions. But that is progress – we learn which way is wrong and we try differently.
Reality, or our ideas and models – which do we prefer? The reality of the universe, gradually revealed by us, is what guides us. Revealed at the cost of a painful search, followed by immense joy from understanding and applying new knowledge.
Simplification and mathematical abstraction have their limits – the limits of the actually observable state. It is not possible to describe the curve of a tree trunk exactly – marked in orange in the image below.
Some simplification is possible – an approximation to the ideal state. This is followed by a mathematical model. Look closer and note the small bud on the tree trunk – next image below.
A whole branch can grow from it, or there once was a branch. That doesn’t matter. What does matter is that this branch can influence the whole tree and therefore the whole situation. Just as the whole universe can arise from the smallest quantum – hardly something to be neglected! Let us return to the bud. The bud is included in the orange curve shown in the next figure.
Still, we can’t determine what will be or what was without further observation. Let us finish by noting that actual reality is so multivariate, with unpredictable influences, that it is impossible to realistically describe the past or future from the current state beyond a few multiples of the current state’s timescale. Nothing against idealized models, but they are only models with limited validity. One cannot make long-range predictions based on such models, watch them fail, and then blame Reality (or G-d) for not being fair. It is our superficial work that is not fair.
Our predictions based on idealized models have decreasing value into the future without further observation, without further correction. The same holds for deductions about the past.
Reality is consistent, without contradictions. But our knowledge is limited, and always will be, so there will always be contradictions in it, however slight or great. It’s a bit like Gödel’s incompleteness theorems.
Briefly and roughly – in 1931 K. Gödel proved that for every infinite set of logical statements S it is possible to construct, by the means of this set S, statements T whose truth cannot be proved or disproved by the means of the set S. At the same time, Gödel proved that it is always possible to supplement the original set S with a new set of statements U so that the extended set S1 = S + U makes it possible to prove or disprove the truth of all statements T. Unfortunately, in the expanded set S1 there will be new undecidable statements T1 whose truth cannot be proved or disproved by the means of the set S1. Thus we must again expand S1 with a new set of statements U1 so that the newly expanded set S2 = S1 + U1 makes it possible to prove or disprove the truth of all statements T1, but … and so on indefinitely.
In spite of the above, there is the certainty of the level of knowledge that has been reached, a repeatable experience that continues to grow if we want it to be so.
How do we discover and formulate natural laws – for example, the law of gravity? Only by observing stabilized processes: by observation of planetary orbits and further abstraction. The planets are replaced by ideal points of a certain mass; in other words, the size of the planet is unimportant relative to its distance from the sun. We can then solve for the orbital periods with sufficient accuracy. But the situation is quite different in the case of dense nebulae forming planets – planetesimals: irregularly shaped masses. See Fig. below
Abstraction in this case is practically impossible. Sure, we can divide the shape of the matter into infinitesimally small objects. These objects are subject to mutual attraction according to laws discovered on the basis of so-called ideal bodies. But differential calculus helps us only in the case of clearly definable shapes. In the case of purely random shapes which, moreover, change randomly, there is nothing to do but observe. See Fig. below
Conclusion: It is quite impossible to determine the age of the solar system from a law of gravity derived from the current “stabilized” shapes of the planets. There is a contradiction here. It is absurd to define the orbital period during the formation of the planetesimal of the later Earth, and to measure the entire formation period of the solar system by this period, on the basis of the hypothesis that a stabilized solar system originated from a dense primeval nebula.
So how do we determine the formation period of the solar system – and, moreover, using our ideal tropical year? Use a scale other than gravity: a quantum scale, the decay of atomic nuclei. We use the half-life of a radioisotope – the time it takes for half of the radioactive nuclei to decay. This gives us the rate of decay. Roughly written, the tropical year or parts of it (day, hour, second) are compared with the half-life of a suitable radioisotope. We can then determine the formation time of the solar system expressed in years – even though years as we know them today did not exist at the time the planetesimals formed.
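The conversion from a measured decay ratio to an age can be sketched in a few lines. This is a minimal illustration of the half-life relation just described; the U-238 half-life is a standard reference value, and the remaining fractions are made-up examples.

```python
import math

def age_from_fraction(remaining_fraction, half_life_years):
    """Age of a sample from the fraction of the parent isotope still present.

    N(t) = N0 * (1/2)**(t / T_half)  =>  t = T_half * log2(N0 / N(t))
    """
    return half_life_years * math.log2(1.0 / remaining_fraction)

# Uranium-238 has a half-life of about 4.468 billion years.
print(age_from_fraction(0.5, 4.468e9))    # one half-life:  ~4.468e9 years
print(age_from_fraction(0.25, 4.468e9))   # two half-lives: ~8.936e9 years
```

Note that the result is expressed in years only because the half-life itself was first calibrated against our year – exactly the comparison of scales the text describes.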
There has been a change. Instead of gravitational effects, we measure TIME using quantum effects – the decay of atomic nuclei. But that’s not such an ideal solution. Better written: it is an ideal solution, but it is not realistic, it does not fully correspond to reality. In short, we’ve replaced one scale, gravitational time, with a second scale, quantum time. The connection between gravity and the quantum is missing – the unified theory supported by experiment.
Try to solve the gravitational effects (trajectory, time, velocity, etc.) on an arbitrarily selected “ball” in the case of the following environment – see Fig. below
Especially if such an environment changes indefinably. Use gravitational laws? Impossible! Use quantum mechanics? Impossible as well.
A very fundamental question: the limits of the law of gravity in a quantum field. When did the law of gravity begin to work? What conditions must be satisfied for it to start working? Was the meaning of the law of gravity hidden in the primordial forms of the universe? Or is the existence of the law of gravity purely random? Under different conditions of the primordial forms, would there be a different law, or different laws? Are the primordial forms of the Universe like DNA in biology – hidden properties or laws of matter? In other words, are the later properties of matter hidden in the primordial forms? Or, on the contrary, are the forms and laws of matter the result of further evolution – the influence of random fluctuations of primordial forms, disturbances and inhomogeneities of the vacuum? By matter I mean stabilized, or rather renewed, excitations of the quantum field in the form of quarks and elementary particles. Moreover, the law of gravity needs inertial mass – bound mass (energy), such as particles with so-called rest mass.
By the way, a polar bond is not possible in the plasma state. Neither are other chemical bonds – covalent, ionic, etc.
There are allowed combinations of chemical bonds – e.g. methane CH4, but HC4 is not allowed. The allowed and disallowed states are determined by the “discrete” arrangement of the electron shells of the atoms – the Pauli exclusion principle along with Planck’s constant. And again another question: when did quantum laws – and thus the discrete arrangement of matter – begin to exist?
Imagine an omnipresent quantum field – chaotically arising and disappearing vacuum fluctuations. Everything changes over time. But Time has not yet been defined! Time has meaning only with matter. By matter we mean the excitations of the vacuum field into various wave forms – quarks, electrons, photons; in short, particles characterized by mass. These grouped excitations of the vacuum field, called matter particles (quarks, electrons, photons), are constantly being renewed. These particles are free or bound. Free particles like photons do not have a so-called rest mass. Bound particles have rest mass. Bound particles are grouped into atoms and these into molecules – the basis of all matter in the universe. Time, or the sense of time as we perceive it, arises only with the formation of matter, that is, with renewed excitations of the quantum field in the form of bound particles, i.e. particles with rest mass. Photons do not have time – all their energy is free. Electrons, protons, neutrons – particles with rest mass – are characterized by what we perceive and measure as time.
A very interesting question – the thickness of the biosphere on Earth.
Consider the very big, empty surrounding universe against the very limited space of our Earth – so much empty space relative to such a small area of the globe, at least in terms of the occurrence of life. The thickness of the biosphere is estimated at around 8 to 10 km: a few kilometers down into the sea and about 5-6 km up into the air to the mountains. Living organisms – flora and fauna – are found within this range. But fauna – animals – depends on flora – plants or algae. The fact that the eagle can live at high altitudes is thanks to plants; or rather, thanks to the smaller birds and raptors that depend on plants. It’s the same with deep-sea fish: their food is smaller fish that feed on plankton or algae.
Let’s do a thought experiment. We have a thin, non-translucent membrane that covers the variously textured surface of the Earth and the surface of the oceans, seas and lakes. We place this membrane a few millimeters above the surface of the land and waters. What’s going to happen? All plants, algae and lichens disappear. Many animals survive, even at high altitudes. Likewise, a lot of deep-sea fish won’t notice the change. So the 8 km biosphere will still be inhabited. But after a few days or weeks, life will disappear – the biosphere will be no more.
The same would happen if we were to place the supposed impenetrable membrane a few mm below the surface of the oceans, seas and the variably textured surface of the earth. On the surface, no change – but plants and algae would be gone after a few days. As a result of the broken food chain, the animals both on land and in the seas would last only a few weeks.
Conclusion: It is well known that all life on earth depends on green plants. But green plants have stems, roots and leaves. In short, these plants combine two completely different environments, and they can only exist in a combination of those two environments – air and solid ground, or water in hydroponics and aeroponics.
The transition between the two different environments is a necessary condition for the formation of plants and algae. The transition between two different properties is the source of some interesting processes – the source of the effects is the transition itself, regardless of the area then affected. See electronics, the junction between different semiconductors: an almost infinitesimal transition if we go from millimetres down to micrometres.
Mathematics has its own limits. These limits arise from the very nature of mathematics. For mathematics to exist there must be distinguishable events in global space and time. These distinct events are products of local changes in global space and time. Try to mathematically describe the image below – really hard work, or absolutely impossible when the shapes change continuously,
which means that local distinct events (shapes) must be distinguishable from each other for a long time. Only then can maths get to work – differentiating events, sorting and naming distinct events into sets that have common attributes. Sets are groups of distinct events with common attributes. After that mathematics is able to operate on these sets – to add, subtract, multiply and divide them, to solve equations along with graphical analysis, to differentiate, integrate, solve differential equations and much more. Mathematics presupposes totally identical sets or objects or processes or anything else. Otherwise it wouldn’t be mathematics. See the basic equation 1 + 1 = 2. But we know that Nature is alive and ever-changing and that no two events are absolutely identical. The same is valid for groups of events called sets.
Briefly, to remember – maths is the result of human abstract thinking. See below
In the upper image there are six or eight distinguishable shapes. Human abstraction has condensed these shapes into so-called ideal points. The ideal point has zero size. But zero size is impossible in the world – just like the ideal line or curve or plane. These idealities are only our abstract projection. The real world has a foggy (probabilistic) structure, very evident in the particles that make up our ordinary world.
Not to mention the contradictions – it’s hard to construct anything of a definite size out of zero size, whether a segment on a line or anything else. There is no way to connect the ideal point with the ideal line. By the way, the ideal line looks like the ideal point if we view it along its axis, just as the ideal plane would look like the ideal line if we could view it along its edge.
So ideal points, ideal lines or ideal systems – yes, why not, but they must be subordinated to the real facts of nature around us, and must respect changes in the base units, which are certainly not ideally constant.
What is accuracy? The greatest number of oscillations – the more oscillations, the greater the accuracy? In the same way we would have to verify the movements of billiard balls after N collisions. It is not enough to predict that after 1000 collisions the balls will be placed so and so. The calculation accuracy is limited: after N collisions between the balls, the next direction of a ball’s movement will lie anywhere in the range of ±180 degrees. Pure chaos. Not to mention that the accuracy of the Planck constant and the gravitational constant is limited, like that of the speed of light. And it doesn’t help if we declare them fixed values.
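The explosive loss of accuracy after N collisions can be illustrated with a toy model. The magnification factor per bounce below is an illustrative assumption (a standard hard-sphere chaos argument, not a measured quantity), as are the ball radius and free-path values.

```python
import math

def collisions_until_chaos(initial_error_rad, ball_radius=0.03, free_path=0.5):
    """Count collisions until an initial direction error exceeds pi (±180°).

    Toy hard-sphere model: each bounce off a convex ball magnifies an angular
    error by roughly (1 + 2*free_path/ball_radius) -- an assumed factor.
    """
    error, n = initial_error_rad, 0
    while error < math.pi:
        error *= 1.0 + 2.0 * free_path / ball_radius  # magnification per bounce
        n += 1
    return n

# Even a direction known to one part in a billion becomes pure chaos
# after only a handful of collisions:
print(collisions_until_chaos(1e-9))
```

Improving the initial accuracy by many orders of magnitude buys only a few extra predictable collisions, because the error grows exponentially.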
Do the scales or rulers change as well – the rotation time of the Earth, or the oscillation frequency of the cesium atomic clock? We merely expect constant rulers over historical time. Only in the long-period average do we suppose identical fluctuations. But neither organisms nor nature itself are identical. Each organism of the same species differs slightly from the others, and over geological epochs the changes among organisms are extreme. See the differences between the Devonian flora and the plants of today.
How to validate, how to measure ever-changing events, shapes, structures or forms at all? E.g. by using a base form. Let’s take a very good example – topography, or rather mapping. The common basis of the mapping effort is the triangle. The best tool to measure and validate the earth’s surface is the use of trigonometry in geodesy.
It is enough to accurately measure a single length on the earth’s surface, perhaps on the order of a few hundred metres, and then measure, from both ends of the marked length, the two angles to the top of the highest mountain, which may be tens of kilometres away.
And not only in geodesy but also in astronomy – finding the distance of the nearest stars by using twice the radius of the Earth’s orbit around the sun (see astronomical unit) as the baseline. See the definition of the parsec.
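The same law-of-sines construction underlies both geodetic surveying and stellar parallax (the parsec is just this triangle with a 1 AU baseline and a one-arcsecond angle). A minimal sketch; the baseline and angles are made-up numbers for illustration.

```python
import math

def distance_by_triangulation(baseline, angle_a_deg, angle_b_deg):
    """Perpendicular distance from a baseline to a remote point, given only
    the two angles sighted from the baseline's ends (law of sines)."""
    a = math.radians(angle_a_deg)
    b = math.radians(angle_b_deg)
    c = math.pi - a - b                           # angle at the remote point
    side_from_a = baseline * math.sin(b) / math.sin(c)   # side opposite b
    return side_from_a * math.sin(a)              # height of the triangle

# A 500 m baseline and two sightings of 89.7° reach a peak ~48 km away:
print(distance_by_triangulation(500.0, 89.7, 89.7))
```

One carefully measured length plus two angles replaces a physically impossible direct measurement – the whole point of trigonometric networks.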
The triangle is the best shape in the world. Other shapes like squares, rectangles and polygons can be constructed from triangles, and not vice versa. Let’s write down some interesting properties of triangles. See below
A triangle has three sides and three angles. The first rule is the triangle inequality – the sum of the lengths of any two sides must be greater than the length of the third side. The next rule: the longest side of a triangle is always opposite the largest angle, the shortest side is always opposite the smallest angle, and the middle side is always opposite the middle angle. If all sides are equal then all angles are equal, and if two sides are equal then two angles are equal. See below
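These rules are easy to check numerically. A minimal sketch, using the triangle inequality and the law of cosines to confirm that the largest angle faces the longest side:

```python
import math

def is_valid_triangle(a, b, c):
    """Triangle inequality: any two sides together must exceed the third."""
    return a + b > c and b + c > a and a + c > b

def angles_deg(a, b, c):
    """Interior angles opposite sides a, b, c, from the law of cosines."""
    A = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
    B = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
    return A, B, 180.0 - A - B

print(is_valid_triangle(3, 4, 5))   # True
print(is_valid_triangle(1, 2, 5))   # False: 1 + 2 < 5
print(angles_deg(3, 4, 5))          # the 90° angle sits opposite the side 5
```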
There is a special triangle called the right-angled triangle, in which the longest side, opposite the right angle, is called the hypotenuse. This triangle can be generated by rotating the hypotenuse in the unit circle.
Triangles must be subordinated to the real world, and not vice versa. Imagine a part of Nature – rocks with valleys – and the task of measuring them, their retreat or approach, and of estimating the conditions in early stages without verification. Similarly, mathematics must be subordinate to Reality and not vice versa. See the image below – Worldwide Geodetic Network
The location of the continents is changing. Where do we find a fixed point or fixed distance in the network shown above? What do we choose as the basic unit of distance? Perhaps the longest distance on land? But even that changes, although less in comparison to the drift of the continents. We have the locations of the continents from millions of years ago – see A. Wegener, the connection of South America with Africa, etc. The evidence is clear: paleontological findings and geological formations. But how do we verify the location of Pangaea or Gondwana? It can only be roughly estimated. There is no verification like in the case of the network above, where we can use satellites, lasers or radars to verify and correct the base distances.
So we discovered that the shape of the earth is indefinable – from a distance it is like an ideal sphere, closer up like a geoid flattened at the poles, and very close up like a pear-shaped body, with the northern hemisphere smaller than the southern hemisphere.
Not to mention that the earth’s surface is constantly changing – the dynamic effects of the lithosphere on the earth’s crust, etc. There is no chance of finding an absolutely fixed point or distance on the Earth.
The purpose of the following highly simplified model is to replace the Earth’s geoid with a flat “geoid”, or a flattened ellipse, with shifting and changing surfaces (continents). In short, we will deal with a trigonometric network in an environment of local and global changes – so-called rubber-sheet geometry.
… to be continued
The post The meaning of abstraction in Reality appeared first on ROMANVS Roman Mojzis.
Before the actual measurement or control, it is necessary to mention the limits of mathematics.
The real world is always determined by a range of properties – a range of values, tolerances. Every form and event in the world is always changing. Even if we have a rod or ball manufactured with the highest possible accuracy, the diameter of the ball is not constant but varies within a certain range: not only the manufacturing tolerance, but, for example, the thermal vibrations of its molecules. These cannot be stopped.
In short, reality is always foggy. There is always a probability range – an unpredictable value within a given probability range. We do not and cannot ever know the exact size of a ball or any other manufactured product, or of the realities of nature.
But mathematics supposes stable (invariant) numbers. See base units. Mathematics cannot be otherwise by its very nature – the numbers 1, 2, 3, …, to infinity. The number 2 represents the sum of two equal (identical) values of 1. Mathematics, even when it calculates with real numbers, must round them to a limited number of decimal places. Such mathematics cannot be mindlessly applied in the real world, where no two entities are the same and where, moreover, they keep changing. Mathematical equations are valid for an ideal world, a world with ideally given changes – a world removed from Reality, i.e. a dead world. And that is how everyone ends up who tries to evaluate the world with this idol. Mathematics is like language – it can only direct us, lead us toward inexpressible and changing Reality. Nothing against equations – they are indispensable – but they must be subordinated to Reality and not vice versa.
The agreement of the mathematical equations of idealized models with reality – see the crank mechanism or the pendulum, likewise thermodynamics and many other examples. Many scientists were then literally fascinated by the mathematical possibilities, which led them to explain or predict events in areas that could no longer be verified. See Archimedes, fascinated by the power of the lever to move the Earth – which is theoretically and practically impossible.
Ideal models are valid, even 100% valid, for given ideal conditions. The validity of the prediction is limited by the computational accuracy. So nothing against ideal models determining absolute temperature, the expansion time of the universe, or the position of a crank mechanism. One can take a real crank mechanism into consideration – determine (scan) the dimensions of the connecting rod, piston and other parts, determine the structures of the materials used, the locations of the greatest forces and thus the locations of breakage or wear. At the points of sliding movement, the greater the tightness, the greater the accuracy, but also the greater the wear; and the greater the freedom in sliding, the lower the accuracy, but also the lower the wear. Mechanisms, like all processes in the universe, are subject to wear. Even if we determine the point of highest wear, we cannot determine the exact location of the wear and its progression. It will always have a probabilistic course. In other words, a random, unpredictable element enters into the idealized real mechanism – or a predictable one, but only within a certain range. This range will be many orders of magnitude away from the original accuracy of the calculation (the scan of dimensions, positions of interacting parts, etc.). Thus the original high accuracy of the idealized real mechanism will be fatally broken, and the accuracy of the prediction will be completely off in this case as well.
For an idealized crank mechanism made of ideal steel, we can predict for millions or billions of years with sufficient accuracy. It is not possible to predict the position of a real crank mechanism for the x-th turn, let alone for x days or years. We don’t know what will happen. The same is true for all mechanisms. No two steels are exactly the same, even if they fall into the same category under the same code. The same code guarantees the validity of mechanical properties to 98% or 99.98%, but such steels are very expensive. The greater the simplification from reality, the greater the clarity and illustrative power, but the shorter the validity of the prediction over time. Not to mention the weather forecast for half a year or 10 million years: we know „exactly“ the weather on Mars, we know „exactly“ the processes at the beginning of our universe, but we are unable to predict the weather more than several days ahead. Not to mention the so-called three-body problem in mechanics.
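For contrast, the ideal slider-crank position really is a closed-form function of the crank angle, exact for any turn number; the crank radius and rod length below are illustrative assumptions.

```python
import math

def piston_position(theta, crank_r=0.05, rod_l=0.20):
    """Ideal slider-crank: distance of the piston pin from the crank axis
    for crank angle theta (radians). Dimensions are assumed, in metres."""
    return crank_r * math.cos(theta) + math.sqrt(
        rod_l**2 - (crank_r * math.sin(theta))**2)

# The ideal mechanism is exact at top dead centre (r + l) and bottom dead
# centre (l - r) on every single turn -- unlike any real, wearing mechanism:
print(piston_position(0.0))       # ~0.25 m
print(piston_position(math.pi))   # ~0.15 m
```

The equation never wears out; the steel does. That gap is exactly the limit of prediction the text describes.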
E.g. projectile movement in various unpredictable chaotic environments – see the foggy background of our world, the different atmospheric conditions along the path of the projectile. To predict the future path, or estimate the past one, at a given accuracy is a big problem. However, in ideal conditions, where there is no atmosphere, it is quite easy. That’s what gunners would like – perhaps on the Moon.
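The airless case is indeed closed-form. A minimal sketch of the level-ground vacuum range (the muzzle velocity is a made-up figure):

```python
import math

G = 9.81  # m/s², standard gravity

def vacuum_range(v0, elevation_deg):
    """Range on level ground with no atmosphere: R = v0² · sin(2θ) / g."""
    return v0**2 * math.sin(2.0 * math.radians(elevation_deg)) / G

# Closed-form and exact in a vacuum; maximum range is at 45 degrees:
print(vacuum_range(300.0, 45.0))   # ~9174 m
print(vacuum_range(300.0, 30.0))   # shorter
```

With an atmosphere, drag, wind and turbulence destroy this simplicity, which is why real gunnery needs tables, corrections and feedback.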
Conclusion: The projectile must have a control system to correct its trajectory. Similarly, theories, no matter how “perfect”, have to be verified, otherwise they are not very meaningful.
Measurement and control are necessary in the real world. Every process in reality must be controlled. And measurement, including follow-up control, is based on observation, on the use of measurement – the comparison of base units with measured values. Permanent verifiability. Process control is the basis for everything – see Adaptability, Complexity, Proportion, Reciprocity. There must be limits to the range of the control. The range of control means the possible states, the deviations, etc. The range is given by detailed calculations; without these calculations no process control is possible. It is absurd, on the basis of idealized models, to predict the operation of a crank mechanism for hundreds of days or years, or to predict events in astrophysics for billions of years, without observation. Nothing against idealization and initial calculation – one has to start somewhere – but one has to know this and go further according to experience with Reality, verifiable Reality.
Where is the right value N? We are unable to measure directly the right value N of anything – the diameter of a ball or anything else. We are only able to measure a certain number of values which, when plotted, resemble Gauss’s distribution curve of random errors. In other words, we consider the peak of this curve to be the right value N.
Measured values oscillate in the interval N ± d around the right value N, which is irrational (see how rational numbers oscillate around irrational ones to infinity) and, crucially, the indescribable right value N keeps changing – it’s “alive”.
The proof: the Gaussian distribution curve itself also oscillates slightly over time, when we take several sets of measurements of the value N – e.g. the diameter of a ball. We could say “probability of probabilities”. We don’t measure all values at the same time; we measure sequentially over time. Each value has its own time at which it was measured. The measured values change – they oscillate, with the frequency of their appearance following the Gaussian curve of the frequency distribution of the measured values. Each value differs both in value and in the time at which it was measured. It cannot all happen at once for matter, where there is a difference between time and distance. That would not be matter then, but radiation, where there is no difference between time and distance.
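This “probability of probabilities” is easy to simulate. The sketch below draws several sets of noisy measurements around an assumed right value (both the value and the noise level are made-up) and shows that each set’s sample mean – the peak of its Gaussian – lands slightly differently:

```python
import random
import statistics

random.seed(1)  # reproducible run

def measure(true_value=25.000, noise_sigma=0.010):
    """One simulated measurement: the 'right value' plus Gaussian
    instrument noise. Both numbers are illustrative assumptions."""
    return random.gauss(true_value, noise_sigma)

# Five sets of 100 measurements each: the sample means cluster near the
# right value, but no two sets agree exactly.
means = [statistics.mean(measure() for _ in range(100)) for _ in range(5)]
print(means)
```

Each run of 100 measurements gives its own slightly shifted peak, just as the text describes for repeated measurement sets of a ball’s diameter.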
Let’s turn to the emission of light – e.g. the wavelength N of light emitted from a hydrogen atom. Specifically, the line spectra, so characteristic of each element, from hydrogen and helium through sodium, etc. The line spectrum needs to be taken realistically and not strictly mathematically. There is no ideal line without thickness in the world, just as there is no ideal point or ideal circle. Reality, then, is always a bit of a blur. It follows that our measurements are inaccurate. Nor is there an absolutely sharp scratch on a measuring instrument. Thus even the so-called sharp line spectrum of emitted light has a very small yet fuzzy width. It is best expressed, again, by the Gaussian distribution curve.
We will never measure two identical values at the wavelength level of light – see the orange lines in the image below. Light is part of an ocean of quantum field, full of random quantum fluctuations, never repeatable. Hence the idea for a theory of dissipation, or irreversibility.
We take the distance between the two peaks h of the two curves as the value of the “correct” wavelength. It should be added that there is no exact value of the wavelength of the emitted light, for this kind of blur is inherent in the nature of the real world in and around us. So it would be better to stop calling this attribute, in the form of a mathematical notation and graph, the distribution curve of measurement errors – it is a natural characteristic of real nature. We don’t call the movement of molecules an error, and we don’t call the range of the oscillating movement of a spring an error. Would we then call the movement of an elephant or a human in a restricted place an error, just because they are meant to sit in the middle and not walk at various distances from the chair around the room? Shall we call pregnancy a disease?
It should be remembered that the determination of Planck’s constant was done in the way described above – the same way as the determination of the gravitational constant or the speed of light. There has always been a range of accuracy in the measured values. Now these values have been declared invariant. But in a changing world they are not really constant. The point is that simplification through the fixation of fundamental constants can bring complications later.
Certainly the logic of fixing the base constants is clear: the constant is fixed to a certain value and the other values change, with a much larger tolerance range. This is exactly what can make the method misleading. Even a unit is subject to change, and these changes (the tolerance range) cannot always be included in derived or interacting quantities.
When we measure the dimension of an object, we measure it with a certain final accuracy. The final accuracy is determined by the sum of the accuracy of the measuring instrument and the accuracy, or stability, of the object being measured. See below
Resolution versus regulation. It’s impossible to regulate processes that we can’t distinguish. For regulation, we need a distinguishable impulse. The more precise the scale, the more precise the regulation. However, there is a regulatory limit, or limit of distinguishability – not only for optical instruments in terms of diffraction, or due to the thermal movement of molecules, but above all the fundamental law of quantum mechanics, the Heisenberg uncertainty principle.
What can I say? When we want fine regulation we have to make fine distinctions. But that’s the trouble. Fine resolution requires short wavelengths for the observation process. See microscope resolution – the shorter the wavelength of the light used, the greater the resolution. In other words, we see more detail under blue light than under red light. If we use ultraviolet light or an electron microscope, we can very finely resolve even the smallest details that were previously indistinguishable. But there’s a problem, and that problem is the energy of the radiation used. We wanted fine control, so we want fine – too fine – resolution. But fine resolution means using high-energy radiation, and high-energy radiation will strongly influence the object or event we observe. So what we observe will have little to do with the initial event. Roughly speaking, if we want to work on finer details, in practice we will have to use a stronger hammer. And that’s incompatible with finesse.
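The wavelength-resolution trade-off has a standard quantitative form, the Abbe diffraction limit; the numerical aperture below is an assumed typical value for a high-end dry objective.

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture=0.95):
    """Smallest resolvable detail d = λ / (2·NA), the Abbe diffraction
    limit. NA = 0.95 is an assumed typical value, not a fixed law."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Shorter wavelength -> finer resolvable detail, exactly as argued above:
print(abbe_limit_nm(650.0))   # red:         ~342 nm
print(abbe_limit_nm(450.0))   # blue:        ~237 nm
print(abbe_limit_nm(250.0))   # ultraviolet: ~132 nm
```

What the formula does not show is the cost: the shorter-wavelength photons carry proportionally more energy and disturb the observed object more.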
So, there are control limits from which we can get a certain optimum for a given control procedure together with the surrounding conditions.
Let’s start from the ground up. In the view below you can see an optical system made up of a lens with a certain aperture diameter. The lens has a focal length f at which all the rays are focused. See below
But beware of the singularity. All radiation of a certain density would join together at one point, where there would then be an infinite density of radiation. Yes, this is nonsense. There is no ideal point, no ideal lens, no ideal conditions. Yes, it is possible to set paper or cotton on fire with a lens and focused light – but without any singularity.
In the system above we will concentrate especially on the resolution and the depth of field, or sharpness. The resolving power, like the depth of field, is related to a certain minimum size, a certain minimum quantum: either the size of a single sensor element, or the wavelength of the radiation together with a given aperture number. See below the image with the minimal size of the sensor element – e.g. one pixel, marked red.
We can see that the size of one scan pixel influences the position tolerance of the whole sensor, such as a CCD, etc. Depth of field, as well as sharpness, is determined by the sensor size, aperture diameter, focal length and wavelength of the light used. See below the tolerance
As shown below, the placement tolerance of the complete sensor is given by twice the df. The smaller the pixel, the smaller the depth of field at a given aperture. If we want to keep the depth of field with a smaller pixel, we have to increase the aperture number, i.e. decrease the aperture diameter. Limit state: when the aperture diameter equals the pixel size, we can place the film or CCD sensor anywhere.
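The pixel-size / aperture trade-off can be sketched with the usual rule of thumb that one pixel is the acceptable circle of confusion (an assumption, not an exact law); the pixel sizes and f-numbers are made-up examples.

```python
def focus_tolerance_um(pixel_um, f_number):
    """Sensor-side placement tolerance 2·df ≈ 2 · N · c, taking one pixel
    as the acceptable circle of confusion c (a rule of thumb)."""
    return 2.0 * f_number * pixel_um

print(focus_tolerance_um(6.0, 8))    # 96 µm of placement tolerance
print(focus_tolerance_um(3.0, 8))    # halving the pixel halves it: 48 µm
print(focus_tolerance_um(3.0, 16))   # doubling the f-number restores it: 96 µm
```

A finer pixel demands either finer mechanical placement of the sensor or a smaller aperture diameter – the same resolution-versus-regulation trade-off in miniature.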
to be continued …
The post Measurement and Control appeared first on ROMANVS Roman Mojzis.