As someone who works decently close to, but not in, this area, I am surprised to see this on the front page of HN. The paper's authors do not use correct statistical practices (e.g. H_0 cannot be fixed “as a nuisance parameter” to remove a degeneracy with another parameter - nuisance parameters must be marginalized over!), and they fail to account for several effects in their model (e.g. stretch/color factors for each supernova must be varied) that are known to be necessary for robust inference of cosmological parameters from supernova data.
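To illustrate the nuisance-parameter point with a toy example (a minimal sketch with invented numbers, not the paper's actual likelihood): when two parameters are correlated, fixing one at a point value yields artificially tight constraints on the other compared with integrating it out.

    import numpy as np

    # Toy correlated Gaussian "likelihood" over (H0, q) -- purely illustrative.
    # Fixing the nuisance parameter H0 slices the distribution; marginalizing
    # integrates it out, and the two give different error bars on q.
    H0_grid = np.linspace(60.0, 80.0, 401)
    q_grid = np.linspace(-1.0, 0.0, 401)
    H0, Q = np.meshgrid(H0_grid, q_grid, indexing="ij")

    rho, sH, sq = 0.9, 3.0, 0.15  # invented correlation and widths
    z = (((H0 - 70.0) / sH) ** 2
         - 2 * rho * ((H0 - 70.0) / sH) * ((Q + 0.55) / sq)
         + ((Q + 0.55) / sq) ** 2)
    L = np.exp(-0.5 * z / (1 - rho ** 2))

    dH, dq = H0_grid[1] - H0_grid[0], q_grid[1] - q_grid[0]
    p_marg = L.sum(axis=0) * dH                      # marginalize over H0
    p_fix = L[np.argmin(np.abs(H0_grid - 70.0)), :]  # "fix" H0 instead

    def sigma(p):
        p = p / (p.sum() * dq)
        mean = (q_grid * p).sum() * dq
        return np.sqrt(((q_grid - mean) ** 2 * p).sum() * dq)

    print(sigma(p_marg), sigma(p_fix))  # fixing H0 understates the error on q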
This is an honest question since I have seen this phenomenon occur a few times now with cosmology/astrophysics papers on HN: How did the original poster find this? And why has it gotten such interest/points?
I sincerely hope it is simply a well-intentioned interest in our universe (which it greatly heartens me to see!) combined with naïveté (not meant pejoratively, just to refer to lacking context) wrt the technical nature of this work, but I am interested to hear your thoughts.
Maybe it's some deep cultural hangup the CS people got at university; there's certainly too much "I could have done physics" in the top personas of tech for it to be pure coincidence.
I saw it floating around Twitter with some expansive commentary like "This could be an incredible revolution in Cosmology. The Dark Energy model of the universe, which won a Nobel Prize in 2011, may be completely wrong", etc.
For whatever reason, engineers really hate dark energy and will glom onto any fringe theory that appears to disprove it. Not to psychoanalyze too much, but it seems to be a topic where non-experts get to feel like they're smarter than those PhD cosmologists because they watched a Sabine Hossenfelder video.
See literally any thread about dark energy (or dark matter, which elicits similar reactions) on HN.
I have mentioned this in two comments but HN gets really kooky when it comes to cosmology. A few years ago I saw a barrage of highly upvoted papers on MOND and de Broglie stuff.
It really made me wonder what else gets posted here that is patently absurd but I don't have the prerequisite knowledge to filter it.
That is probably the same phenomenon that makes the sketchiest papers on nutrition science appear in popular news.
Sketchy things have the most interesting results. People who want entertaining news select for interesting results, and the sketchy ones get over-represented.
* I have seen incredible incompetence in my field. I have no illusions that it is better in other fields (plus personal stories from others about their fields).
* The replication crisis. Generally speaking, it cast serious doubt on a lot of fields.
* The "trust the science" messaging during COVID did massive damage.
* Science news influencers. Dark energy is one of Sabine Hossenfelder's common themes, and her take is generally rather negative.
* Dark energy is supposed to make up ~70% of everything in the universe, yet basically nothing is known about it. That's a hard sell.
And yes, every point above comes with a massive asterisk - it's a massive simplification, or arguably just incorrect. That doesn't help the overall perception. Perception is everything on social media.
Basically, if there were a theme, it would be "WTF are they doing?" Or, if you want a more poetic version, "Modern science has been a voyage into the unknown, with a lesson in humility waiting at every stop."
There is something seductive about the idea that instead of using existing stuff, scientists just made up something new and shiny and were wrong. That happens in our field every day.
Your comment only reinforces this. This is a peer-reviewed paper in a reputable journal (Monthly Notices of the Royal Astronomical Society). Yet you are asserting significant failures to use correct statistical practices. If you are correct, that is just another notch in the "WTF are they doing?" category.
A parenthetical remark just to clarify since I was reading quickly.
Wrt the second point in parentheses about stretch factors - upon a second look it is not clear to me exactly what is happening since they say the factors are both fixed for each supernova but then also list the (global?) stretch and color factors in Table 1 (implying that they are varied).
If this is the case then the paper is easily dismissed, correct? How is it being published if the flaws are immediate, obvious, and completely destructive to the idea being presented? I'm legitimately asking - I've published papers and reviewed them, and if I thought something was so thoroughly wrong, I certainly wouldn't give it the ok.
I can't speak to the technical nature of the work, as I don't work in the field, but the last-named author, likely the advisor, seems to be a respectable researcher.
> I have served on the committee of the International Society on General Relativity and Gravitation 2017-2022, and am a past President of the Australasian Society for General Relativity and Gravitation and a past President of the New Zealand Institute of Physics. I served 8 years on the editorial board of Classical and Quantum Gravity, 2012-2019.
So while your criticism may well be justified, you make it sound like this is a fringe paper, which I don't think is the case. So instead of opening a meta discussion about what physics papers get posted & upvoted on HN and why (which is hardly novel), I'd be much more interested in how big the issues you're mentioning are and whether the results of the paper could be salvaged. I at least do think that Wiltshire's research is interesting and he has made good points[0] about the challenges of coarse-graining spacetime structures and where & why the underlying assumptions of LCDM might fail to hold.
I am certainly not suggesting anything negative about the character or reputation of the authors of the work. I think that work on alternatives to the accepted concordance cosmology model (LCDM) should definitely be explored, and David Wiltshire et al. should pursue this if they deem it promising.
As an aside: You may not find this compelling, which is understandable, but I will note that the vast majority of cosmologists (very conservatively 95%) do not question the FLRW aspect of the cosmological concordance model (which is what the paper here does away with in the alternative timescape cosmology), even if they question other parts of it (e.g. by considering dynamical dark energy, neutrino interactions, etc.). I agree that timescape is an interesting idea, but it seems like only a few people have been working on it for over a decade now - unless you have a very (and I think unfairly) dim view of professional cosmologists, if there were a strong case to be made for the timescape model based on data, the greater community would have adopted it by now.
Finally, and independent of the sociological points above, here is an answer to your question about specifics, beyond the statistical/modeling issues I mentioned above.
Again, this is not exactly my area and I have not done the analysis myself, so I will refrain from strong opinions. However, my immediate reaction to this is that it is easy to fit one particular dataset (in this case part of the Pantheon+ supernovae sample) with a more complicated model than LCDM, but often these types of models fail when other cosmological datasets are included. Considering such joint constraints with multiple datasets is not an “extra ask” of alternatives to LCDM - to be taken seriously, any alternative model should hold up in a joint analysis of precision cosmological datasets (see, for example, the tests of a more seriously considered alternative model in the community, early dark energy [0]). These days, this is often the combination of Cosmic Microwave Background (CMB) anisotropies, galaxy Baryon Acoustic Oscillations (BAO), and one of several supernova samples - see e.g. [1] for a somewhat pedagogical overview. The failure to fit several datasets is a common issue with, e.g., many MOND papers. In this case, the authors do not fit anything but a single (modified) supernova dataset. I would expect that if they included a fit to CMB/BAO data, there would be trouble with the timescape model, as this is exactly what was found in an analysis of the timescape model with another state-of-the-art supernova dataset [2] - there, the timescape model could not accommodate the supernova data when combined with CMB/BAO data.
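For readers unfamiliar with the jargon, here is a schematic of what "joint constraints" means in practice, using made-up one-parameter Gaussian summaries in place of the real SN/CMB/BAO likelihoods: the chi-squared contributions of independent probes simply add, so a model has to satisfy every probe at once rather than each in isolation.

    import numpy as np

    # Invented (mean, sigma) summaries standing in for full likelihoods;
    # real analyses use full data vectors and covariances, but the
    # additivity of independent chi^2 terms is the point.
    probes = {"SN": (0.28, 0.05), "CMB": (0.315, 0.007), "BAO": (0.31, 0.015)}

    def chi2(x, mean, sigma):
        return ((x - mean) / sigma) ** 2

    x = np.linspace(0.2, 0.4, 2001)
    chi2_joint = sum(chi2(x, m, s) for m, s in probes.values())
    print("joint best fit:", x[np.argmin(chi2_joint)])
    # A model tuned to match only the SN summary while missing CMB/BAO
    # picks up a large joint chi^2 penalty and is disfavoured.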
> However, my immediate reaction to this is that it is easy to fit one particular dataset (in this case part of the Pantheon+ supernovae sample) with a more complicated model than LCDM, but often these types of models fail when other cosmological datasets are included.
This is what I was afraid of. Thanks for the links, in particular for [2]!
I mean, probably. Though HN does have a taste for fringe theories, which might color your interpretation of "well-intentioned". And most of us aren't really qualified to assess the statistical rigor of astrophysics papers, myself certainly included.
For context: The top comment on most HN stories, especially research, tries to completely discredit the OP, often by finding flaws, and especially in statistical methods.
Everything has flaws. I think people are interested in what is valuable and possible. Shakespeare's work has many flaws, but that's not what people focus on.
Also, while you aren't responsible for all those other top comments, why should I believe yours? Usually I just ignore these comments (but I appreciate your curiosity).
Failures in statistical methods are not covered by 'everything has flaws'. Those are fatal, existential flaws. Something that is not likely true (though it is erroneously presented as likely to be true) is likely false.
> Something that is not likely true (though it is erroneously presented as likely to be true) is likely false.
That doesn't make sense to me. Because something isn't proven here, that doesn't make it more likely to be false; it's just uncertain. Poor evidence is not evidence either way. To say it's false, it would need to be proven false, with good evidence. If I assert, 'the universe is expanding because Pluto is further away today than yesterday', my argument wouldn't support the claim but that doesn't logically imply that the universe is not expanding.
> Failures in statistical methods are not covered by 'everything has flaws'. Those are fatal, existential flaws.
Why are errors in statistical methods -- if they exist here: we have a hot take by a random, anonymous Internet commenter (using a new account) against scientists who spent a long time on this work, and put their names and reputations on it -- somehow more fatal than other errors?
For example, some statistical errors lead to weaker results, but results nonetheless. Some lead to results with a somewhat different meaning. (Some lead to stronger results.)
We need to deal with imperfect information all the time and find value in it, or we would have almost no information. I spent last week solving a problem with several routers interacting; I had some clear data, some unreliable information, and some black-hole uncertainty; I had to work with what I had and solve the problem. The idea that science is exempt from that is a fantasy of non-scientists, of the religion of science.
This paper argues that the Timescape model [0] provides a better fit than the cold dark matter model when examining Type Ia Supernovae. According to the Timescape model, clocks run faster in voids where the gravitational field is less, and significant differences exist between a galaxy floating in a void and one like the Milky Way Galaxy. The Timescape model suggests that other models, which fail to account for these differences, lead to less accurate calculations and less plausible solutions.
Thanks for saving me time in dismissing this paper lol. Any time somebody wants to get rid of dark energy, I run into some garbage. Reminds me of the MOND nuts.
Just reading the rest of the comment section is enough to help me verify that.
For some reason, Hacker News always gets kooky when it comes to this stuff.
I don't know, the evidence for dark energy has always seemed a lot sketchier than the evidence for dark matter. Dark matter has lots of interlocking lines of evidence. Isn't dark energy pretty much entirely based on various cosmic distance measures that all have huge stacks of assumptions embedded?
I agree. Until I see better evidence for Type Ia supernovae, WMAP, and cluster formation in another theory, I really want all the charlatans to be quiet. We don't know what dark energy is, but we have decent evidence to say it is there, and also decent theory.
I am not saying this paper is made by charlatans, btw. This type of work attracts those people, though.
If clocks run slower in the presence of gravity, wouldn't it stand to reason that they run more quickly in a void where there's less gravity? Or is the model saying that clocks run even faster in a void than Einstein's theory predicts?
Clocks run at "normal" speed (i.e. "1x" speed) in the absence of a gravitational field. The stronger the gravity, the slower they run (i.e. less than "1x" speed).
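For a sense of scale in ordinary GR, here is the textbook Schwarzschild rate for a static clock (this is standard time dilation, not the timescape model's cumulative, structure-dependent effect):

    import math

    # dtau/dt = sqrt(1 - 2GM/(r c^2)) for a static clock at radius r from mass M.
    G, c = 6.674e-11, 2.998e8          # SI units
    M_sun, AU = 1.989e30, 1.496e11
    rate = math.sqrt(1 - 2 * G * M_sun / (AU * c ** 2))
    print(1 - rate)  # ~1e-8: a clock 1 AU from the Sun lags by about 1 part in 10^8

As I understand it, the much larger percentage differences quoted for timescape come from integrating different expansion histories in walls versus voids over the age of the universe, not from a local potential factor like this.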
This has always felt to me like evidence of a sort of computationalism. I am not a computationalist, but the thought is the "universal CPU" needs cycles for each particle. Mass is what takes time to process, so the voids experience no/less computational delay. This reads like the simulation author is messy and constrained, not godlike.
To me it's not about mass, but more like "maximum information density". There's a limit on the information density (rate of happening?), so when a parameter X changes too much, it affects other parameters -- they become constrained so that the total information density stays within the maximum limit. That would indeed sound like some kind of computational limit if the universe were a massive CPU with constrained resources...
But I'm a layperson and I have no idea what I'm talking about :)
Right, so is the paper saying that Lambda CDM completely ignored clock differences due to heterogeneity in the mass distribution of the universe, where isolated galaxies would experience less time slowing than galaxies near other galaxies, which would experience more time dilation?
In the standard cosmology the Integrated Sachs-Wolfe effect captures the redshift/blueshift of distant light sources (up to the Cosmic Microwave Background) as it traverses relatively dense regions and relative voids.
Note that in the next paragraph I depart significantly from the vocabulary that the Timescapes programme proponents have been using for the past twenty years.
ISW and comparable spectroscopy is easy enough to think about in terms of an accelerating cosmic expansion, i.e., relative voids are becoming spatially bigger with the expansion. It becomes much less intuitive how to fit the data if one instead keeps relative voids at roughly constant volume, implying that there is a significant false vacuum above the ground state and that in voids the false vacuum is slowly decaying to that state. (Outside the supervoids, near matter, this false vacuum decays much more slowly still.) Because "vacuum" in the voids isn't really vacuum, one is stuck with either a running function on the constant c (it gets faster with time from the formation of the CMB; this is because the false vacuum evolves towards a real vacuum) or adapting lightlike geodesics by imposing refraction (since the false vacuum is a medium).
The usual terminology is reasonably captured in the first paragraph here at <https://en.wikipedia.org/wiki/Inhomogeneous_cosmology#Inhomo...> ("Inhomogeneous universe"). The following short section ("Perturbative approach") is what is done in the standard cosmology when one wants to do detailed studies of filamentary distributions and other structures that are lumpy at some (larrrrrge) length scale of interest: the perturbed homogeneous background is practically always the standard FLRW.
The justification for perturbation theory on FLRW is that even though there are dense spots (notably most galaxies' central black holes), principles like the Birkhoff theorem capture the idea that as you get far enough away from a galaxy it behaves more and more like a small shell, and this happens at intragalactic scales for these SMBHs: gravitationally, even to its arms' structure, it makes practically no difference whether Andromeda's central bulge has a lot more stars/gas/dust or whether it has one, two, or six central SMBHs (at enough spatial separation that they're not mutually orbiting in a way that would generate gravitational radiation our observatories are sensitive to).
The same idea applies to galaxies->galaxy clusters->filamentary structures: as you "zoom out" the density variations become less important: filaments are pretty sparse on average.
The Timescapes programme wants a sharper difference in matter sparseness between voids and filaments, and proposes that gravitational backreaction by the matter is responsible for generating it: the presence of matter steepens the density of matter over time (without the visible matter clearly becoming denser). I don't personally see how that's much different from a false-vacuum decay in the voids, conceptually. (ETA: well, it depends somewhat on how the Timescape void fraction evolves, but the local universe VF doesn't run void clocks fast enough, unless we do violence to the Copernican principle.)
Finally, I think the most important result of this latest Timescapes paper is a reminder to everyone that supernova data are a mess. A good X-mas present would be a couple of readily visible Milky Way supernovae.
T CrB is a recurrent nova (RN), not a supernova (SN).
There is only a microscopic chance of the white dwarf member undergoing runaway fusion and becoming a Type Ia SN. So microscopic that it would be truly surprising, astrophysically.
Pet theory is that our universe is run on some external computational substrate. A lot of the strangeness we see in quantum physics is a side effect of how that computation is executed efficiently.
The inability to reconcile quantum field theory and general relativity stems from gravity being a fundamentally different thing from matter: matter is an information system that's run to execute the laws of physics, while gravity is a side effect of the underlying architecture being parallelized across many compute nodes.
The speed of light limitation is the side-effect of it taking a finite time for information to propagate in the underlying computational substrate.
The top-level calculation the universe is running is constantly trying to balance computation efficiently among the compute nodes in the substrate: e.g. the universe is trying to maintain a constant complexity density across all compute nodes.
Black holes act as complexity sinks, effectively "garbage collection." The matter that falls below the event horizon is effectively removed from the computation needs of the substrate. The cosmological constant can be explained by more compute power becoming available as more and more matter is consumed by black holes.
This can be introduced into GR by adding a new scalar field whose distribution encodes "complexity density." e.g. some metric of complexity like counting micro-states, etc. This scalar field attempts to remain spatially uniform in order to best "smooth" computation across the computational substrate. If you apply this to a galaxy with a large central supermassive black hole, you end up with almost a point sink of complexity at the center, then a large area of high complexity in the accretion disk, and then a gradient of complexity away towards the edges of the galaxy. That is, the scalar field has strong gradients along the radius of the galaxy, and this gives rise to varying gravitational effects over the radius (very MOND-like).
Some back of the napkin calculations show that adding this complexity density scalar field to GR does replicate observed rotation curves of galaxies. Would love to formalize this and run some numerical simulations.
Would hope that fitting the free parameters of GR with this complexity density scalar field would yield some testable predictions that differ from current naive assumptions around dark matter and dark energy.
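Not the proposed scalar-field model (there isn't enough specified above to simulate it), but for context, here is the baseline any such modification has to beat: plain Newtonian gravity from the visible mass alone falls off roughly Keplerian at large radii, while observed rotation curves stay roughly flat. The mass and velocity below are round, illustrative numbers.

    import numpy as np

    G = 4.30e-6                      # gravitational constant in kpc (km/s)^2 / Msun
    M_baryon = 6e10                  # rough visible (disk+bulge) mass, Msun (illustrative)
    r = np.linspace(2.0, 30.0, 15)   # galactocentric radius, kpc

    v_newton = np.sqrt(G * M_baryon / r)   # point-mass / Keplerian approximation
    v_obs = np.full_like(r, 220.0)         # schematic flat observed curve, km/s

    for ri, vn, vo in zip(r, v_newton, v_obs):
        print(f"r={ri:5.1f} kpc   v_Newton={vn:6.1f}   v_obs~{vo:6.1f} km/s")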
"External computational substrate" is a useful idea if it leads to falsifiable theories. As a "theory of everything" it sucks because it's clearly not motivated by any specific maths or observations, but by the human need to map nature onto some comprehensible analogue, i.e. taking some simpler subset of nature and pretending the rest of it is like that as well. Usually nature so far has become more incomprehensible the deeper we've looked at it.
Newtonian mechanics and mechanical clocks being the hottest precision technique led scientists at the time to view nature as clockwork. Now that we have computers, we think "nature is like a computer" because it's an appealing analogue.
But it's a false analogue, imo. Just as clocks are a thing enabled by nature (a subset, in every meaning of the word), computers are likewise a subset of nature. So yes, nature can think (with human brains) and nature can run computations (with CPUs loaded with programs), but that also is just a subset of nature.
Now: games of the mind and helpful analogues rock. And asking "how is nature analogous to a Turing machine?" is interesting for sure. But just because a game is fun or an analogue appealing, one should not forget, in the philosophical sense, that one is playing only with a limited subset of a thing.
Well his idea was that the laws of physics actually change depending on the distance from the galactic center. Far enough out, information can be transferred faster than the speed of light. Too close and life stops working. Which honestly seems much more fun than gravity slowing things down a bit.
Webb is turning out to be one of the most impactful pieces of scientific apparatus of the last century or so. Not that it took all the relevant data, but that it was the final thing that broke open all the doors being held shut. We're watching a Kuhnian paradigm shift in astronomy unfold in real time.
which shows that as early as 1973 people knew they had no idea how supermassive black holes could possibly form. Lately these problems have intensified because Webb seems to show that all sorts of developments happened a lot more quickly than they should have, which leaves one wondering if the first billion years were really the first ten billion years. Could Timescape explain that?
The Royal Astronomical Society's piece [1] on this study mentioned,
"The model suggests that a clock in the Milky Way would be about 35 per cent slower than the same one at an average position in large cosmic voids, meaning billions more years would have passed in voids."
If the gases pulled in toward a black hole were experiencing only tens-of-millions-of-years of time during a full billion years (as compared with the billion that elapsed on a clock experiencing a lower-gravity "universal average" condition of gravitation and of time), then does that mean...
that those magnetic fields and gases must emit toward us a greater or smaller intensity of radiation than they would emit if they had been rubbing together at our own rate of time?
It's just substituting gravitational redshift for cosmological redshift; one can also substitute kinematic redshift for cosmological redshift. The redder bits are running slower, which one sees from spectral lines that are reliably generated by quantum processes. There are reasons for preferring cosmological redshift, and in particular it's because whatever is going on with the distribution and motion of galaxies, it's doing an excellent imitation of a big (~70%) stable energy density present smoothly at every point.
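For anyone following along, the three redshift mechanisms being contrasted here can be written side by side (standard textbook relations, nothing specific to the paper; the gravitational case is written for a Schwarzschild field with r_s the Schwarzschild radius):

    1 + z_{\rm cos} = \frac{a(t_{\rm obs})}{a(t_{\rm emit})}, \qquad
    1 + z_{\rm grav} = \sqrt{\frac{1 - r_s/r_{\rm obs}}{1 - r_s/r_{\rm emit}}}, \qquad
    1 + z_{\rm kin} = \sqrt{\frac{1 + v/c}{1 - v/c}}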
The reverse of your question is interesting: a quasar host galaxy or a Seyfert galaxy isolated deep within a supervoid should be much less redshifted than similar cluster and filament galaxies (one would have to "weigh" the galaxies, and so would lean on angular-diameter distance and other geometrical relations, relative redshift of hydrogen-alpha (and millimeter CO and 21cm neutral hydrogen spin-flip etc.) at the galactic limbs and in globular clusters). The side-effect of Timescapes running clocks much slower in filaments is that they run much faster in supervoids. Many known voids have small numbers of galaxies in them (the emptiest of voids have something like a tenth of the radio-generating matter compared to large galaxy clusters, so they're nothing like "empty", which in turn drives the focus to the evolution of Timescape's "void fraction"), so looking for faster-running galaxies in voids is not an impossible observational test.
Going forward again, the late-time Integrated Sachs-Wolfe structure would be anisotropic, with strong nonlinear differences between ISW radiation through filaments (because of the significant gravitational backreaction central to Timescapes) and ISW radiation through voids. Coarsely, that's not what we see (nonlinear effects are subdominant) and is an obvious target of study of the Timescapes proponents.
AFAIK one possible explanation for the black hole issues could be primordial black holes, which are also a candidate for at least a component of dark matter.
Yep. There is the idea that you could get little primordial black holes (that maybe weigh as much as a mountain and could be evaporating now) and the idea that you could get huge primordial black holes. Also the occasional strange idea that the universe might be cyclic (not too fashionable but can fill the hole left by inflation) and that black holes can survive the crunch.
I love the idea. It’s one of our current physics hypotheses I hope is true, because it means the universe would be full of tiny things the size of a hydrogen atom with the mass of asteroids.
“The devil’s glitter?”
BTW, they would not suck up planets and stuff like inaccurate sci-fi suggests. One could be going real fast and fire right through the Earth and do little, maybe cause some seismic events, but we would never know unless we knew exactly what to look for. A tiny black hole would have a tiny event horizon.
If you dropped one into a planet or star with a low enough velocity that it didn’t shoot out the other side it might do a lot of damage, then come to rest in the middle and slowly grow. I recall reading that one in Earth’s core would take possibly millions of years to do much since the radiation pressure caused by accretion around it would limit the rate of matter falling into it. Earth would eventually become an Earth mass black hole but it would not happen in any human lifetime, possibly not in the lifetime of the human race.
How could the universe be cyclic? There would need to be some strange self-consistency/teleology thing forcing it to collapse each time. Generally seems doubtful.
Are black holes needed right now? If not, you can't argue off selection.
Conformal cyclic cosmology states that the end state of the universe is conformally equivalent to its beginning and that therefore any notion of scale loses its meaning at the end/beginning of a cycle.
So it doesn't collapse. It only gets larger and larger, but since the notion of distance is encoded in the metric, and since only relative distances matter, it's "as if" it restarts once there's nothing left but energy (i.e. nothing to measure distances from or to). It doesn't actually contract and return to T=0, it's more like going to T % N and starting over.
And there are theoretical gravitational artifacts that could be measured in the CMB if the theory is true.
A couple things on conformal cyclic cosmology (CCC) for smooth 3+1 dimensional spacetimes:
> it restarts once there's nothing left but energy
The "nothing left but energy" part is a requirement that matter (in the broadest sense) must be conformally invariant so that its active gravitation as a source allows for the CCC conformal rescaling of the FLRW metric, which is how a predecessor "aeon" is connected mathematically to its successor.
Of the Friedmann dusts, only lightlike radiation can be rescaled this way. Equivalently, all the matter left in the universe at the end of an aeon must be on null geodesics.
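For reference, and as I understand the CCC construction, the rescaling in question is the ordinary conformal transformation of the metric, and a trace-free (pure radiation) stress-energy is what makes the matching across the aeon boundary consistent:

    \hat{g}_{ab} = \Omega^{2}\, g_{ab}, \qquad T^{a}{}_{a} = 0 \quad \text{(conformally invariant source, i.e. pure radiation)}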
Because local Lorentz invariance is fully baked into the Standard Model of Particle Physics (SM), and because the SM has particles with nonzero rest masses (which in a Lorentzian patch cannot couple to null geodesics), NOBODY knows how to do this in a consistent way (let alone in a way that matches actual evidence from particle physics).
(If we restrict to QED we need to convert all electrons and positrons into photons and impose some unknown mechanism to suppress statmech fluctuations back to a non-conformally-invariant condition. In reverse order, two-photon physics are maaaaaaybe suppressible adiabatically. However, there just aren't enough positrons to find and annihilate every electron (adiabatic expansion makes that even less likely!), and electrons on their own don't decay into photons. Of course, once we add in the weak and strong forces, we are far beyond quantum electrodynamics, with all sorts of new ways in which reaching a conformally invariant stress-energy state becomes implausible. Oh yeah, and now do dark matter.)
WRT previous comments in this thread, any black hole (BH) at the end of an "aeon" must radiate only massless bosons. This puts a pretty strong lower limit on the mass of Hawking-radiating BHs crossing a boundary between "aeons", or destroys the mathematics of the hypothesized conformal rescaling by virtue of having incompatible non-conformally-invariant field theories on both sides of the Einstein Field Equations.
Binary BHs don't fit cleanly into the FLRW rescaling picture either: among other things they source a metric that isn't locally isotropic and homogeneous (a dust of high-mass isolated singleton Birkhoff-theorem BHs is mostly fine though, and in principle you could get there through BH mergers and a stronger cosmological coupling (some types of "fifth force"/quintessence, for instance, to break apart wide BH binaries/triples/multiples)).
There are maybe escapes from some of these constraints in extra dimensions and lattices, but I've been under the impression that one of the attractions of CCC is that it's compatible with the FLRW metric of the standard cosmology.
Does CCC work with small perturbations on the boundary between aeons? Who knows. However, it probably works with small perturbations near that boundary, because we do perturbative FLRW routinely these days. So maybe there's some mechanism (e.g. in dark energy) that makes small deviations from conformally invariant field configurations entirely vanish at the aeon boundary. But we have no astrophysical or particle physics evidence for that at all.
Finally, the article at the top is about inhomogeneous cosmology -- i.e., the Timescapes metric is not an FLRW metric -- and whatever one's position on CCC, it rests on the standard view that at large scales the universe is homogeneous and that any inhomogeneities, anisotropies, and backreactions at smaller scales are small enough to admit perturbation theory.
I haven't read the literature, but if what you've said is close to right I don't buy CCC. There's too many alternative explanations that fit the data at least as well.
Infinity is a big number though, if stat mech lets it go through it will go through given enough time.
Again, CCC is one of those "feel good" theories in my eyes. It's not better than normal heat death from a prediction standpoint, right now. The universe doesn't end for good, it restarts. Yeah right.
Edit: Is CCC on a positive cosmological constant? If so and you accept a few assumptions anything that can go through will. Hawking radiation off the event horizon will get you anything you need to annihilate through nucleation.
Edit 2: looking at your history I suppose you know that because it's relevant to Boltzmann brains. I'm confused about your reference to fluctuations 14 days ago though. Nucleation and fluctuation Boltzmann brains aren't the same thing.
Me neither (to say the least), but my comment was an attempt to fairly characterize CCC in this thread's context.
> Is CCC on a positive cosmological constant?
AFAIK it's not a requirement of Penrose's approach, but with a positive CC the aeon/future_aeon conformal boundary is spacelike, so you can have global hyperbolicity (and importantly a(t) is always positive and describable by classical EOMs, and the overall structure is just flat de Sitter with a radiation fluid).
[For clarity, here I'm the one extrapolating from the radiation fluid, and afair I'm not summarizing someone's attempt to do this rigorously.]
> it restarts
The conformal rescaling isn't exactly a restart; a Eulerian observer A crossing into its future aeon could encounter a highly similar scaled-up observer B running much more slowly (and made from fundamental particles with much longer de Broglie wavelengths compared to A's, those having gotten there by freezing out of A's cold sparse ultra-long-wavelength massless boson gas, which to B is B's hot dense ultra-short-wavelength early conditions).
> Infinity is a big number though
Yeah, I think that's why CCC proponents are OK with past-aeon's (pseudo) heat death managing to be a rescaled future-aeon's low entropy initial conditions.
On the other hand, they seem to think in merely large-but-finite timescales when they talk about final BH evaporations leaving a lower-entropy mark on the pseudo heat death that can carry across the boundary and (rescaled to possibly sub-horizon perturbations from B's perspective) be detectable in the future aeon's structure formation.
> confused
Sorry about that. Perhaps I lost some train of thought that if made explicit would have made it clearer why I might have been talking about both nucleated and fluctuated structure. I don't remember and am unlikely to re-check the context.
Black holes can survive a Big Crunch scenario? That can go a long way to explaining many things. Can you please provide a paper with more references to this, and potentially one with an example mechanism?
"Cosmological models are built on a simple, century-old idea – but new observations demand a radical rethink" (2023) < by David Wiltshire, one of the authors of this paper, aimed at non-physicists
All of these were loosenings of certain criteria that opened up many possibilities. It is certainly erroneous to assume we must, by necessity, have a homogeneous cosmology.
This is the opening salvo in cosmology's Battle of Trafalgar. Dave Wiltshire has lined up a set piece 20 years in the making that is going to obliterate both lambda CDM and MOND and all the rest.
I think the most exciting thing is probably the team he's got together now, and some of the computational stuff his associates have going at the moment.
A very compelling argument that the need for dark matter may be an artifact of an incorrect assumption about the universe: the extent to which it is homogeneous and large-scale structures can be ignored in calculations.
I'm surprised cosmology hasn't accounted for differences in clocks given how central GR is to astronomy. Granted, I am no expert, but adding this dynamic was, until today, a bridge too far, or thought to average out somehow and not be pertinent.
> cosmology hasn't accounted for differences in clocks given how central GR is to astronomy
Of course it has. Yes, LCDM's FLRW metric, by its defining assumption of spatial homogeneity, doesn't allow the metric (let alone the speed of clocks) to vary spatially. However, it is very common to do perturbation theory on top of the FLRW metric to account for density fluctuations. Besides, there are also models like LTB (Lemaître-Tolman-Bondi) which give up on homogeneity at the non-perturbative level (while still preserving isotropy, though).
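Concretely, the standard perturbed FLRW line element in the Newtonian (longitudinal) gauge, ignoring anisotropic stress and with c = 1, already carries a spatially varying clock rate through the potential Phi (this is textbook cosmological perturbation theory, not anything specific to the paper):

    ds^2 = -\left[1 + 2\Phi(\mathbf{x},t)\right] dt^2 + a^2(t)\left[1 - 2\Phi(\mathbf{x},t)\right]\delta_{ij}\, dx^i\, dx^j

Clocks in deeper potential wells (Phi < 0) tick slower at first order; the timescape argument, as I understand it, is that this perturbative bookkeeping stops being adequate once backreaction from nonlinear structure is large.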
All in all, the idea that local voids could explain away the Lambda in LCDM is anything but new. It's just that the OP's timescape approach is the first one that seems to produce promising results. (Disclaimer: I merely skimmed the paper.)