A Response to Counter Apologist

Difficulty Level: Advanced

This post gets a bit technical at points. Be prepared to encounter terms and concepts you might not have seen before.

Context

Counter Apologist (CA), a contributor to the Real Atheology Facebook Page, responded to a post of mine on Facebook. For the full context, here’s what my original post said:

Counter Apologist responded to this post at the link below:

https://counterapologist.blogspot.com/2017/12/are-our-standards-too-high-for.html

The following is my response to the above link.

General Remarks

Having read his post carefully, I find very little I can agree with. Most of what he says strikes me as obviously false. For instance, his first critique says this (emphasis added):

Like all good apologetics, some of what Cameron says here is quite sensible: Even if it’s possible that doesn’t mean we have enough justification to think it’s probable or likely.

However, this doesn’t support his idea that just because there are some answers to the Kalam or Fine Tuning arguments that rely on “it’s possible that X premise is false” that doesn’t mean that there are not better, stronger objections – or that the appeals to possibility in that context is analogous to believing far reaching possibilities in mundane situations.

He starts off by attributing to me an idea I did not articulate. It’s not my “idea” that there are no better objections to the Kalam or Fine-Tuning arguments. The point I was making was that some people confuse possible alternatives with probable alternatives. That isn’t to say that all people do this, or that there aren’t probable alternatives. I wasn’t making that claim in this Facebook post.

The remainder of the article goes on to provide supposed counterexamples to the claim he ascribes to me. Now, I agree that at least some of his arguments would constitute counterexamples if I were making that claim. But I wasn’t. Spending an entire post responding to claims I did not make is a textbook case of the strawman fallacy.

This is why I am a huge proponent of asking for clarification. You’ll often see me ask, “What do you mean by that?” This can be frustrating for some people–and I get that–but it’s an honest attempt at understanding the actual claims of my interlocutor. I don’t want to respond to a claim they aren’t making.

That said, let’s analyze his alleged counterexamples in depth, as I’m convinced they are all false.

The Kalam

Next he offers two critiques of the Kalam Cosmological Argument: (a) Science gives strong evidence that A-theory is false, and (b) Science doesn’t tell us the universe began to exist. These, he claims, are good reasons to think the Kalam is unsound. Now, I could argue that both claims are dubious (which they are), but instead of arguing that, I’ll take a much simpler approach. I will argue that even if both claims are true, it doesn’t follow that the Kalam is unsound.

Regarding (a), assuming he’s right that science gives strong evidence against A-theory (which again is dubious), I agree with Calum Miller and others that one could simply forward an A-theory-independent version of the Kalam (e.g.: “one could simply take as one’s datum a universe with an ‘early’ temporal boundary”). That’s easy.

Regarding (b), assuming he’s right that science doesn’t tell us the universe began (which again is dubious), Craig is explicit that the evidence from science merely confirms what his two philosophical arguments already establish re: the beginning of the universe. So even if the science is out on the beginning of the universe, he’s only dealt with a third of Craig’s arguments in defense of the finitude of the past.

So, even if (a) and (b) are true, it doesn’t follow that the Kalam (or something very similar) is unsound. Much more work is needed to reach that conclusion.

The Fine-Tuning Argument

He then turns to the Fine-Tuning Argument (FTA). What’s ironic is that he falls into the trap I warned against in the paragraph he critiques. At least two of his objections to the FTA are based on possibilities rather than probabilities.

First, CA says that (emphasis added):

It could be that all the constants referred to in the FTA actually are necessary – ie. they must have the values they possess because there is a grand unified Theory of Everything (ToE) which explains why those constants have the values they do.

Notice that he says it could be the case there is a Theory of Everything that explains fine-tuning. This is an explicit appeal to possibility, not to probability. And that’s just the beginning of what’s wrong with this statement. To quote Luke Barnes, “A ToE won’t explain literally everything. In particular, the initial conditions (or, more generally, boundary conditions) of the universe are a worry.” In other words, a Theory of Everything won’t explain something like the initial distribution of mass-energy and hence won’t explain the most impressive case of fine-tuning. Moreover, a ToE would not explain why our set of laws of nature exists rather than some other set (e.g.: the set that includes a universal attractive force (like gravity), a force that binds protons and neutrons together in the nucleus (like the strong nuclear force), something like the electromagnetic force, and so on). So not only is he appealing to an unmotivated possibility, he’s appealing to a possibility that doesn’t do what it needs to.

Secondly, he says that (emphasis added):

“Life” isn’t well defined, so we have no idea what other kinds of life could arise given other values of the constants we find in physics.

Here again he’s not arguing that other kinds of life are probable given different values of the constants; rather, he’s saying that, for all we know, life could arise under different conditions. In other words, he’s making an appeal to an unmotivated possibility. I suppose it’s only appropriate to let CA rebut himself; earlier on in the article he agreed: “Even if it’s possible that doesn’t mean we have enough justification to think it’s probable or likely.”

CA then says:

If we look at the universe as a whole we wouldn’t assume that it was finely tuned for life, given that the overwhelming majority of it is exceptionally hostile to life.

This objection is easily dispelled. Let’s start by defining fine-tuning. According to Barnes, fine-tuning is the claim that, “In the set of possible physics, the subset that permit the evolution of life is very small.” That definition is compatible with the overwhelming majority of our universe being inhospitable to life. Consider the following argument:

(1) Most of our universe is hostile to life.

(2) If most of our universe is hostile to life, we shouldn’t assume that, in the set of possible physics, the subset that permit the evolution of life is very small.

(3) Therefore, we shouldn’t assume that, in the set of possible physics, the subset that permit the evolution of life is very small.

When put in the form of a syllogism, it’s obvious that the consequent of (2) doesn’t follow from the antecedent. These are unrelated propositions. Thus, the idea that we ought to conclude that our universe is not fine-tuned for life given that our universe contains very little life is, to put it gently, wrong-headed [1].

Normalization

The last argument CA forwards against the Fine-Tuning Argument is the Normalization Problem (NP). Here’s what he says:

Finally, we can point out that there is a technical problem with appealing to probability in the FTA. This is what is known as the normalizability objection. The main problem is that to say that the values various cosmological constants could take fall within a specific probability range can’t be coherently defined – because the argument presumes that there is no upward bound on the possible values that those constants could take. This means there are an infinite number possible values that those constants could take, but if that’s the case the total probability space can not be summed up to be 1. This is required to be able to make coherent use of probability math.

Let me respond to this with three points. First, this is a well-known problem, and one that Robin Collins explicitly addresses in his work on fine-tuning. His response involves limiting the comparison range. Here’s what he says: “My proposal is that the primary comparison range is the set of values for which we can make determinations of whether the values are life-permitting or not. I will call this range the “epistemically illuminated” (EI) range.” Given that the EI range is finite, any fine-tuning argument that utilizes it is not subject to the Normalization Problem [2]. For anyone who’s interested in learning more on this, check out Collins’ paper in the Blackwell Companion.
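To make Collins’ move concrete, here is a minimal sketch (the numbers are invented for illustration and are not real physical values): once the comparison range is finite, a uniform distribution over it is well defined, and the life-permitting fraction is an ordinary probability.

```python
# Toy illustration (hypothetical numbers, not actual physics) of why a
# finite "epistemically illuminated" (EI) comparison range yields a
# well-defined probability, whereas an unbounded range does not.

def life_permitting_fraction(ei_range, life_range):
    """Fraction of the finite EI range occupied by the life-permitting
    window, assuming a uniform distribution over the EI range."""
    ei_width = ei_range[1] - ei_range[0]
    life_width = life_range[1] - life_range[0]
    return life_width / ei_width

ei = (0.0, 1_000_000.0)        # hypothetical EI range (arbitrary units)
life = (499_999.5, 500_000.5)  # hypothetical life-permitting window

print(life_permitting_fraction(ei, life))  # 1e-06: small, but well defined

# With no upper bound, the comparison range has infinite width, so a
# uniform density over it cannot integrate to 1 -- the normalization worry.
```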

The second point is that Barnes has recently published a paper on the subject as well (click here). There he argues that the NP is a problem not just for fine-tuning but for physics more generally. Any theory that has a free parameter (e.g.: Newton’s theory of gravity and its free parameter G) will run into the NP. Here is his response: “The standard models of particle physics and cosmology avoid these problems as follows. Dimensional parameters do not vary over an infinite range; they are bounded by the Planck scale. Dimensionless parameters might not vary over an infinite range, and common practice in the physical sciences assumes that parameters of order unity are more probable, so a uniform probability distribution is not forced upon us by the principle of indifference.”
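Barnes’ second observation can be illustrated the same way. The exponential density below is my own illustrative choice rather than a claim about the true prior over any constant; the point is only that a non-uniform prior over an unbounded range can still integrate to 1, with most of its weight at values of order unity.

```python
# Minimal sketch: a non-uniform prior over an infinite range can still be
# a proper (normalizable) probability distribution. The exponential
# density here is an illustrative choice only.

import numpy as np
from scipy.integrate import quad

def density(x):
    return np.exp(-x)  # p(x) = e^(-x) on [0, infinity)

total, _ = quad(density, 0, np.inf)   # integrates to 1
near_unity, _ = quad(density, 0, 1)   # weight on "order unity" values

print(round(total, 6))       # 1.0 -- normalizable despite the infinite range
print(round(near_unity, 3))  # 0.632 -- most of the probability near order unity
```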

Third, I agree with Alvin Plantinga (surprise, surprise) that the NP proves too much. If successful, it would undermine some of the most obvious design inferences (see section 2.2 of this paper).

The Normalization Problem is a technical puzzle, no doubt, but it is not one that seriously challenges the Fine-Tuning Argument.

Conclusions

Fortunately, I can agree with at least part of what CA concludes at the end of his post. He says:

While Cameron is correct to say that just because something is a logically possible answer to an argument, that doesn’t give us justification for believing that possibility. A good example of this is the Problem of Evil – just because there is a logically possible way out of the problem doesn’t give any justification to think such a solution is remotely likely.

Let me clarify the dialectical situation. There are at least two versions of the Problem of Evil: The Logical Version and the Evidential (or probabilistic) Version. The logical version says that God and evil are logically incompatible; there is no possible world where (i) God exists and (ii) Evil exists. By contrast, the probabilistic version says that, given all the evil and suffering in the world, God’s existence is highly unlikely. The former is a claim of impossibility, while the latter is a claim of improbability.

Here’s the point: To rebut a claim of impossibility, all one has to do is show possibility (for example, show that there is a possible world where (i) and (ii) are true). That’s it. So at least in the case of the logical version of the problem of evil, possibility is all that’s required.

Where I agree with CA is that possibility is not sufficient as a response to evidential or probabilistic versions of the problem of evil. There we require something more robust than possibility. For my approach to the probabilistic problem of evil, see this blog post or, for a more rigorous treatment, my debate with Justin Schieber.

In summary, Counter Apologist’s entire blog post is a response to a claim that I never actually made. And on top of that, nearly everything he says is false. If I had more time, I would deconstruct his claims even further, but alas, it’s time to get back to work.


Notes:

[1] For a further defense of this, consider Barnes once again: “A precise definition of life is not required, for the following reason. We would like to be able to place firm boundaries in parameter space between possible universes that would develop and support life and those that would not. However, this is not practically possible, as we do not know the sufficient conditions for abiogenesis. What we can do is consider a conservative outer boundary associated with sufficient conditions for lifelessness. For example, if the cosmological constant were negative, and its absolute value 10^90 times smaller than the Planck scale (rather than 10^120 in our universe), space would recollapse into a big crunch in one minute. This, it seems, is a sufficient condition for a lifeless, physical-observer-less universe.”

[2] Also note that Collins’ recent work on fine-tuning for discoverability is not subject to the Normalization Problem.

If you enjoyed reading this blog post and would like to see more like it, consider donating to Capturing Christianity.

35 comments

  1. Cameron

    It is always best to take the pivotal argument of an opposing view and analyse it…

    For example, Counterapologist’s entire “blurb” rests on this statement of his:

    “Modern science gives us strong evidence that the A-Theory of time is false.”

    Well, no… Counterapologist is not correct…. simply refer him to this:

    http://www.cam.ac.uk/research/news/researchers-chart-the-secret-movement-of-quantum-particles

    Clearly, one requires expertise in physics to understand fully what this measurement means…

    However, I believe even Counterapologist would have to surrender his view regarding A-time with a quick view of the news… and if he does that…. the rest of his blog post collapses…

  2. I read Barnes’ paper on normalizability over the summer, and I have mixed feelings about it. He’s not really saying that the Planck scale is the range of *possible* values. Instead, he’s saying that it’s the range of possible values that we can use our current theories to explore what the universe would look like under those values. It’s not that other values are not possible, it’s just that we don’t have any idea what would happen if the constants took those values.

    Barnes’ answer to the “Normalizability Objection” is very similar to Robin Collins’ answer: identify an epistemically illuminated finite range of values that can give us a well defined probability, and hope that this epistemically illuminated region is representative of all possible universes. Collins himself appears to be somewhat hesitant about this method. He writes, “because we are considering only one reference class of possible law structures… it is unclear how much weight to attach to the values of epistemic probabilities one obtains using this reference class. Hence, one cannot simply map the epistemic probabilities obtained in this way onto the degrees of belief we should have at the end of the day.” (Robin Collins, The Blackwell Companion to Natural Theology, p.240-41).

    Collins is very aware that his solution will only work if the finite comparison range is representative of the range of all possibilities. Collins writes, “we must assess the confidence we have in this probability [the probability of a life permitting universe given naturalism] based on the various components that went into the calculation, such as the representative nature of the reference class.” (p.241). But then Collins goes on to say “there is no completely objective procedure for addressing” whether the finite comparison range is representative (p.241). Luke Barnes gives a quick argument for thinking that the epistemically illuminated range is representative of all possible ranges in his book “A Fortunate Universe” but I’m not convinced.

    For those interested, Tim and Lydia McGrew have written a response to Collins here: Timothy and Lydia McGrew, “A Response to Robin Collins and Alexander R. Pruss,” Philosophia Christi 7 (2005): 425–43. They defend the Normalizability Objection against Collins’ and Barnes’ approach of using an epistemically illuminated range to limit the range of possible values.

    1. Hi Ron, thanks for the comment! What struck me in Barnes’ paper was the fact that the Normalization Problem is not just a problem for fine-tuning but a problem for physics more generally. This works in tandem with Plantinga’s objection (we essentially have two independent reasons to Moorean shift the argument). And then keep in mind that Collins’ current work on discoverability avoids normalization issues altogether. So this objection, even if successful, won’t do away with all of fine-tuning.

      1. I’m not sure I understand Plantinga’s “counterexample” involving messages in the sky. I think his argument is that in order for a message to be visible, it would need to be a certain length. And because the range of possible lengths is infinite, if we assume that a uniform distribution applies to objects that randomly form in the sky, then we cannot possibly attach a probability to P(message is visible|it formed randomly). But it seems to me that the obvious response is to simply say that we should not assume that a uniform probability distribution is applicable here. Objects in the sky do not form purely randomly. Laws govern the “natural” formation of celestial objects, and certain objects of certain sizes and certain shapes are more likely to form than other objects/sizes/shapes. So given what we know about the laws of physics and how celestial bodies like planets are formed, we can say that formations of objects into sentences are less likely to randomly come about than a celestial object like a planet or an asteroid. And this is because the Principle of Indifference does not apply to naturally occurring celestial objects, because there is a mechanism that favors certain types of naturally produced celestial objects over others. However, on naturalism, there is no mechanism that favors types of laws of physics themselves over other types of laws of physics, so the Principle of Indifference does apply to the values of the constants (on naturalism). So there is a serious disanalogy between Plantinga’s story and the Fine Tuning Argument.

        Or I’ve misunderstood Plantinga.

        1. The normalisation problem is NOT a problem.

          When one analyses the fundamental physics constants that have been experimentally measured, what one finds is that they form a linear log distribution according to a 1/x probability function.

          This probability function in itself constrains the range in values.

          Recourse to Bayesian analysis is not necessary, instead one just uses a chi-square test to determine the validity of the hypothesis.

          Interestingly, the Bible itself (if you analyse it closely and know how) does in fact come up with the correct range of the values that matches the fundamental physics constant range we find.

          1. I should add (just in case you are wondering) concerning the Biblical comment above…. is that one can show that the Bible satisfies Craig’s Theorem (it is unfortunate that W L Craig will come to your mind; the theorem has nothing to do with that Craig).

            I am giving a lecture on this topic as a rejoinder to Prof Keith Ward’s lecture to the same audience in April.

          2. The Bible predicts the range of possible values for the physical constants? Please explain and cite passages. Thanks

        2. Ron

          When you write:

          “Objects in the sky do not form purely randomly. Laws govern the “natural” formation of celestial objects, and certain objects of certain sizes and certain shapes are more likely to form than other objects/sizes/shapes”

          You are referring to the Nebular hypothesis, right?

          Fact is, this “hypothesis” was made by Kant…. Laplace, unfortunately, did not look at the maths of the hypothesis (though he was more than capable of it)… Laplace simply accepted Kant’s hypothesis carte blanche (like most people) on the authority of Kant. If Laplace had, I doubt he would have accepted it.

          Fact is, the Nebular hypothesis is very, very dodgy…. there is no cosmological evidence to back it up…. in fact, there is much cosmological evidence to discount it!

          So, I would be wary of the continuation of your statement:

          ” So given what we know about the laws of physics and how celestial bodies like planets are formed, we can say that formations of objects into sentences are less likely to randomly come about than a celestial object like a planet or an asteroid.”

            1. Regardless of *how* planets are formed, it is rather obvious that the natural tendency of objects in space is to form into celestial objects *like* planets. Objects like these form far more frequently than objects that look like letters in the English language, so my point is totally unaffected. The principle of indifference does not apply to naturally occurring objects in space. Some objects/shapes/sizes are clearly more frequent and prevalent than others

        3. Ron

          The analogy between Plantinga’s story and the Fine Tuning Argument does work.

          For example, suppose one writes a story (or rather a plan for a story) in such a way that at each of 300 places there is an option of performing two different actions.

          Each of the stories generated by the outline might be only two or three pages long… yet the total number of stories generated by the outline would be infinite…

          So, it is the particular “message” that is “designed” (fine-tuned) that is the result that is important.

          This is why the Bible (i.e. stories that have been written over centuries) that satisfies Craig’s Theorem is itself “Fine Tuned” into an axiomatic system from a possibly infinite number of “stories”.

          Therefore, the resemblance of the Bible to the Fine-Tuning Argument is similar.

        4. Ron

          Sorry, but this format is not manageable for exposition, suffice to say that the Bible identifies the range of fundamental physics constants as being found within this range: 10^(-31) -> 10^(31) …. which is the range in which they have been experimentally found.

          Concerning, your comment:

          “.Some objects/shapes/sizes are clearly more frequent and prevalent than others”

          Again, similar to my previous post, sizes of objects vary according to a log power law…. this means small object sizes are more likely than large object sizes. In this case the power law is continuous.

          Which is interesting because size of words also follow a log power law… this means small words are more likely than large words. In this case the power law is discrete.

          In many respects, when reading Plantinga… the important bit is to understand his connections…. and here I would say that what Plantinga has at the back of his mind is the Mazzaroth.

          1. “suffice to say that the Bible identifies the range of fundamental physics constants being found within this range: 10^(-31) -> 10^(31) …. which is the range that they have been experimentally found.“

            You’ll have to excuse my incredulity. It won’t suffice to just say that. If true, that would be a pretty incredible biblical prediction, so I’d really like to see where it says that.

          2. Ron

            Just wondering… you are incredulous concerning the constant ranges BUT NOT the Craig Theorem?

            I would have thought that your main issue would be the Craig Theorem on account that the constants simply follow from that….

          3. Phillip,

            I’m as curious as Ron on this one: where does the Bible “accurately” predict the range of fundamental constants? I’m also sure you’re aware that 10^(-31) -> 10^(31) [sans units…] is an astounding range and even for back-of-the-envelope ancient math, does this not seem almost hilarious to claim outright as ‘accurate’ to any meaningful degree? I predict that, once my son is able to drive, he will drive between 10^(-31) -> 10^(31) miles in his lifetime. Genius? Divine? Where are the droves of apologists and religious historians saying “the Bible got it right before Hubble?” Pardon my incredulity as well…

          4. Alex

            Regarding “sans units” the result holds regardless of the set of units assumed for the physics constant numbers.

            Hence, the result is not a “fluke”. You see, the fact that a 1/x law holds independent of the choice of units means that it is a truly physical effect.

            The analogy to your son’s mileage is not appropriate, because the constants are a population of mixed phenomena (apples, oranges, meters, grams, etc.).

            As mentioned previously, this format is not appropriate for an exposition that would do the research justice.

            Because, essentially the work is new research… the fact that apologists are not aware of these things is simply because they do not possess the tools necessary… in the main they rely on Philosophy… I do not, my expertise is in condensed matter physics, information-physics and ancient languages…

            Simply email me a request for the lecture (as I posted to Ron) and I’ll be happy to send you the lecture.

          5. Though I should add Alex….

            That if you considered each individual trip your son drove over the course of his life, one would find that his “mileage” for each trip would follow a 1/x power law…. again, this would be a physical effect, i.e. short mileage trips are more likely than long mileage trips.

  3. Cameron,

    Can you clarify how the scientific evidence “confirms” the conclusions of the philosophical arguments for the finitude of the past? “Confirms” usually means “raises the probability.” But Craig’s philosophical arguments aim to show on a priori grounds that it is necessarily true that the past is finite – i.e., P(finite past)=1. By their very nature, a priori necessary truths cannot be “confirmed” by empirical evidence because their probability is already maxed out.

    However, taking an epistemic interpretation of probability allows you to attach probabilities below 1 to necessary truths. So if you are unsure that Craig’s philosophical arguments are successful (let’s say you’re 50/50 on the issue) then scientific evidence could close the gap.

    Properly speaking, the scientific evidence (E) we have is E=our universe began with an expansion. Craig needs to say that E confirms “B=the entirety of all physical reality had a beginning.” To do this, it must be true that P(E|B)>P(E|~B). Let’s say P(E|B) is 1, because B would entail E. But is P(E|~B) really significantly lower? ~B is a catch-all hypothesis comprised of every theory in which the universe didn’t have a beginning. There are an infinite number of conceivable eternal models, and scientists propose them as they come up with them, but since we don’t really know all the theories that make up ~B, there’s no way of predicting how probable E is given ~B. Instead, we have to just do our best by estimating how probable E is on *specific sub-hypotheses* within ~B. Scientists have proposed various versions of ~B that actually entail E, so P(E|B)=P(E|specific ~B theory), so E doesn’t confirm B over these theories.

    Now, Craig may argue that the prior probabilities of these specific ~B theories are low, so even though they predict E, it is still the case that P(E|~B) is low. However, this is precisely the point that Counter Apologist was disputing. He was arguing that these specific ~B models are live possibilities, not just mere logical possibilities that comprise a negligible portion of ~B’s probability space. Craig really needs to argue that the ~B models that predict E have such low priors that they don’t take up much of ~B’s probability space, and therefore P(E|~B) is still low. So if Counter Apologist is correct that these ~B models are more than just bare logical possibilities, but are in fact respectable scientific hypotheses, then they will take up a decent portion of ~B’s probability space, and there will not be a tremendous difference between P(E|B) and P(E|~B). Perhaps E is some evidence for B over ~B, but I think Counter Apologist would say that P(E|~B) is somewhat inscrutable at this point, so E isn’t particularly compelling evidence.
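As a rough illustration of the structure of this point, here is a toy numerical sketch (all numbers are invented): how much E confirms B depends on the gap between P(E|B) and P(E|~B), and that gap shrinks as the beginningless models that predict E claim more of ~B’s prior weight.

```python
# Toy sketch (invented numbers) of Bayesian confirmation: E supports B
# only to the degree that P(E|B) exceeds P(E|~B).

def posterior(prior_B, p_E_given_B, p_E_given_notB):
    """Posterior probability of B after observing E, via Bayes' theorem."""
    numerator = p_E_given_B * prior_B
    return numerator / (numerator + p_E_given_notB * (1 - prior_B))

prior_B = 0.5        # 50/50 going in, as in the comment's example
p_E_given_B = 1.0    # B entails E

# If eternal models that predict E carry little of ~B's prior weight:
print(round(posterior(prior_B, p_E_given_B, 0.1), 2))  # 0.91 -- strong confirmation

# If such models are live options and dominate ~B's probability space:
print(round(posterior(prior_B, p_E_given_B, 0.8), 2))  # 0.56 -- weak confirmation
```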

    1. Ron

      This statement of yours can be clarified:

      ” There are an infinite number of conceivable eternal models, and scientists propose them as they come up with them, but since we don’t really know all the theories that make up ~B, there’s no way of predicting how probable E is given ~B.”

      There is only one way of “predicting how probable E is given ~B” and that is by considering what all these types of eternal universe models share, i.e. the a priori assumptions of all these models, which are:

      1/ Eternal inflation
      2/ Many-World interpretation of quantum physics
      3/ Multi-verse

      However, even given these a prioris, this universe still would have had to have a “beginning”…. because in our universe the micro-world is finite and space-time is finite.

      The main problem all eternal universe theories have to overcome is the role of a quantum measurement, i.e. the theory is constrained by the assumption that quantum measurements are not the result of observation (quantum measurements instead “evolve”); this is why they all must also assume the Many-World Interpretation of QM.

    2. Regarding CounterApologist….. in his blurb he has a BIG problem…. because in it his position is that “time” is NOT a real variable….

      Problem is…. a Multi-verse requires “time” to BE a real physical variable… so, his reasoning is inconsistent…

      1. Can you explain why this is the case? Many (perhaps most?) cosmologists who promote multiverse theories are B-Theorists about time.

        1. I’ll use Vilenkin’s analogy; he says:

          “What causes the universe to pop out of nothing? No cause is needed. If you have a radioactive atom, it will decay, and quantum mechanics gives the decay probability in a given interval of time, say, a minute. There is no reason why the atom decayed at this particular moment and not another. The process is completely random. No cause is needed for the quantum creation of the universe.”

          The important bit in the quote is: “the decay probability in a given interval of time, say, a minute.”

          So, what Vilenkin is saying is that we have a probabilistic fluctuation in the time direction bounded by certain numbers t1 and t2.

          The difference t2-t1 represents a finite total time interval for detection of the particle, i.e. in Vilenkin’s case it is 60 seconds.

          Essentially, we have a probability density function that defines the probability that the particle is measured somewhere within the measurement space at the time (t,t+dt).

          So, a small (t,t+dt) has a low probability of being measured and a high (t,t+dt) has a high probability of being measured.

          Which essentially is saying that in the former case we have a certainty of not existing (t=0, dt=0) but in the latter case we have a certainty of existing.

          Regarding a quantum fluctuation, we can’t have the latter case unless time already exists.

          Time and quantum fluctuation are NOT mutually exclusive.

          Also “observe” what Vilenkin’s comment amounts to:

          1. He admits that time must be a real physical parameter in the measurement.
          2. He admits that the measurement IS observed.

          1. This is all above my pay grade. However, I know that Vilenkin agrees with CounterApologist regarding the B-Theory of time.

          2. Interesting…. is he really?

            Must admit it doesn’t surprise me… on account that Guth’s inflation idea (though in fact, it is Biblical)… suffered from the problem of:

            “What caused inflation to stop in our universe?”

            Vilenkin’s idea was to side-step this bug-bear of inflation theory… and to simply say:

            “Inflation doesn’t stop; it continues creating universes”

            Though it doesn’t answer the question as to why inflation stopped in our universe.

            Awfully weird… that Vilenkin does not believe in time… but, posits his quote.

          3. Are you equating “believing the B-Theory of time” with “not believing in time”? B-Theorists would say they believe in time; they would just say that time is a dimension in a 4D spacetime block and that it does not have the characteristic of “flowing.” Or when you say that someone thinks time is “not real,” do you mean that they think it is emergent at the macro level rather than a fundamental feature of reality at the quantum level? I know that Vilenkin is a B-Theorist, but I don’t know whether he thinks time is fundamental or not

          4. As I stated Ron….

            1. He admits that time must be a real physical parameter in the measurement.

            I mean, isn’t it quite clear in his short exposition that he is treating time as fundamental to how the universe comes about….

            How would you interpret his:

            “What causes the universe to pop out of nothing? No cause is needed. If you have a radioactive atom, it will decay, and quantum mechanics gives the decay probability in a given interval of time, say, a minute. There is no reason why the atom decayed at this particular moment and not another. The process is completely random. No cause is needed for the quantum creation of the universe.”

            With respect to time?

            Obviously, there is a lot of physics involved in this quote of his… because quite clearly he is also stating the universe does have a “BEGINNING”, i.e. he states:

            “quantum creation of the universe”

            His use of the word “random” however is quite odd… on account “randomness” is very difficult to determine (if not impossible)… randomness in an information sense is non-order, i.e. chaos…

            So, this part of his quote: “The process is completely random”

            Is equivalent to: “The process is chaotic”

            So, it would appear (at least from my reading of his quote) that he is in a bit of a muddle….because, clearly he is treating time as a real physical quantity…

          5. Ron

            Just to be clear… and trying to avoid jargon…. in information-physics time and space are treated as random fluctuations… but, they are both treated as real and fundamental….so, using your description, in information-physics both time and space do not “flow”…. they are “probabilistic”…

            I could go into more detail but…

            So, I would say that…. yes, Vilenkin considers time to be a real physical variable. He is saying as much in his quote.

            Again, this idea is also found Biblically…

        2. Also note, that in my first post on this topic… I linked to a Cambridge University paper…

          They use information-physics to get the result…. and in information-physics, time is treated as a real physical variable…(otherwise they would not be able to see the particle interact with its environment in its past… mind you, they could only do this when the measurement is completed… not beforehand).

          Not many cosmologists (if any) are acquainted with information-physics techniques….

        4. Ron

          The thing is…. the Kalam Argument (should that be your target) is a philosophical argument…

          The Kalam is NOT a Biblical Argument…

          Because the Bible is quite emphatic concerning the creation of the universe…. the past did not exist, nor did the present…BUT the future did exist…

          So, the creation of the universe was a case of backward causation.

          This means that the “future” acted as the boundary condition at the moment of creation.

  4. The problem with the Kalam with respect to A-theory and B-theory is in its interpretation.

    A-theory of time is strictly a geometrical global physical argument. (where Cantor axioms apply)
    B-theory of time is strictly a mathematical local physical argument. (where Dedekind axioms apply)

    The confusion in the interpretation of the Kalam using A-theory of time, is just that; because it is clear from Cantor that no last-transfinite number exists. The confusion arises in the Kalam when one applies it to “time”, i.e. the “flow of time”. Debates arise because someone like W L Craig attempts to make a case for the idea of an “aggregate of all aggregates”, i.e. a Cardinal number of the last transfinite, so extreme confusion arises and all arguments lead nowhere.

    The mistake is that the Kalam has nothing whatsoever to do with “time”; should one apply it to “time” the axioms of Cantor negate it absolutely.

    However, if instead one uses “information” as the primitive variable in the Kalam; then the Kalam makes perfect sense (I mean “time” isn’t even mentioned in the Kalam Argument; this should be a hint). This means arguments concerning the Kalam and A-theory are plain nonsense.

    This is because global information does in fact “flow from one system to another” and essentially this is what the Kalam is saying. The Kalam has nothing to do with time!

    This means that both the Kalam Argument of Information (which concerns global flow of information from one system to another) and B-theory of time (which is the local discontinuity of time) apply to Vilenkin’s analogy:

    “What causes the universe to pop out of nothing? No cause is needed. If you have a radioactive atom, it will decay, and quantum mechanics gives the decay probability in a given interval of time, say, a minute. There is no reason why the atom decayed at this particular moment and not another. The process is completely random. No cause is needed for the quantum creation of the universe.”

    This means that the crux of any debate is this statement of Vilenkin:

    “No cause is needed for the quantum creation of the universe.”

    The problem with this statement, if one looks at it from a Heisenberg world-picture quantum view, is that though the universal wave function does not have a time variable (i.e. nothing happens), any interaction with this universal wave function does trigger “becoming”, i.e. a flow of information, discontinuous time.

    Which essentially is what the Bible states (Job 9:8):

    “He alone stretches out the heavens and treads on the waves of the sea.”

  5. You even have a reference to “inflation” in the Job verse, i.e. stretches out the heavens, and “collapse” of the universal wave function, i.e. treads on the waves of the sea.

  6. Cameron….

    How come you won’t publish my posts on this topic any longer?

    They do conform to your ideal type of discussion concerning Christianity, surely….

  7. A bullet-proof re-jig of the Kalām Cosmological Argument would be this:

    1. Whatever begins to exist has an insufficient cause.

    2. The universe began to exist.

    3. Therefore, the universe has an insufficient cause.

    Now, apply this to Vilenkin’s analogy with respect to the radioactive atom:

    “What causes the universe to pop out of nothing? No cause is needed. If you have a radioactive atom, it will decay, and quantum mechanics gives the decay probability in a given interval of time, say, a minute. There is no reason why the atom decayed at this particular moment and not another. The process is completely random. No cause is needed for the quantum creation of the universe.”

    What exactly is the “cause” of the measured “decay”?

    … it isn’t time, it isn’t space…

    It is the fact that the meter (one whose operation the physicist understands) measured the “decay” of the system. (a counterpart would be the adage that a well-posed mathematical problem or question contains the seeds of its solution)

    It was the observed physical measurement that begat the decay phenomenon; otherwise the decay would never have physically existed. (this is the “reason” the decay existed; it was observed)

    Observation creates insufficient cause.
