Religion/Spirituality: The Intelligent Design/God/Theism Thread

blackzeus

21] Evolution and artificial intelligence research have proved that there is no such thing as the “free will” that IDers attribute to designers; and, there is a scientifically respectable form of “free will” that is fully compatible with determinism
“Free will”: A property of conscious intelligent beings (e.g., as we ourselves exemplify) which denies any rigid connection between input and output of information into and from the consciousness, and which is characterized by some form of “causal intervention” of the subject (the I) on the output of consciousness, both objectively observable and subjectively perceived by the intervening I.

Free will as just defined does not mean absolute freedom: the influences present in the input, in the context, and in the existing mind with all its inertial factors and structures, are certainly real. But they are not sufficient to explain or determine the output. In other words, the actions of the I are vastly influenced by outer and inner factors, but never completely determined.

Moreover, free will is in no way strictly linked to the objective results of action: once the action is outputted by consciousness, it can be modified by any external factor independent of the agent. That does not change the fact that free will has been exercised in outputting the action. Thus, the claimed compatibilist account – roughly: one may be subjectively free from imposed constraints but objectively, one’s cognitive, verbal, kinesthetic and social behaviors are predetermined by various factors – fails.

In other words, while the agent is always heavily influenced and limited by external reality, free will is a constant inner space of freedom which can always express itself, in greater or smaller ways, in the “black box” between cognition and action.
Or, as the current status of AI research and scientific studies on origins shows, our reasoning, deciding and acting are not simply and wholly reducible to deterministic forces creating and acting on or constraining matter and energy – most notably, in our brain tissues. Nor does it simply “emerge” from sufficiently sophisticated software. So, if that is being claimed, it needs to be shown, not merely asserted or assumed and then backed up with just-so origins stories. (And, on long observation, that responsible demonstration is precisely what, as a rule, is not brought forth when the issue of free will comes up in discussions. In more direct terms: please, do not beg the question.)

Furthermore, free will is inwardly and intuitively connected to the concept of responsibility.

Indeed, no concept of intellectual, decision-making and moral responsibility could even exist without our intuitive certainty of free will in ourselves and (inferentially) in others. But there is no easy way to define responsibility in a universal way. As free will is essentially a very intimate property of consciousness, so also responsibility is very intimate and mysterious,
although for social necessities it is often, and rightfully, stated in a set of outer rules.
To sum up, free will is an intimate property of consciousness: the intuitions of a perceiving I and of an acting I within ourselves are the twofold real basis of any representation we have of ourselves and of the external world. But free will is also objectively observable, and is the source of all creativity and choice in human behavior. Thus, it is an empirically anchored principle of action exhibited by known intelligent agents, and so it properly takes its place in a theory that addresses reliable identification of signs of such intelligent action.
 

blackzeus

22] Who Designed the Designer?
Intelligent design theory seeks only to determine whether or not an object was designed. Since it studies only the empirically evident effects of design, it cannot directly detect the identity of the designer; much less can it detect the identity of the “designer’s designer.” Science, per se, can only discern the evidence-based implication that a designer was once present.

Moreover, according to the principles of natural theology, the designer of the universe, in principle, does not need another designer at all. If the designer could need a designer, then so could the designer’s designer, and so on. From the time of Aristotle till the present, philosophers and theologians have pointed out that what needs a causal explanation is that which begins to exist. So, they have concluded that such a series of causal chains cannot go on indefinitely. Since an infinite regress of causes explains nothing, all such chains must end with and/or be grounded in a “causeless cause,” a self-existent being that has no need for a cause and depends on nothing except itself. (Indeed, before the general acceptance of the Big Bang theory, materialists commonly thought that the logically implied self-existing, necessary being was the observed universe. But now, we have good reason to think that it came into existence – is thus a contingent being – and so must itself have a cause.) To ask, therefore, “who designed the designer” is to ask a frivolous question. Typically, radical Darwinists raise the issue because, as believers in a materialistic, mechanistic universe, they assume that all effects must be generated by causes exactly like themselves. This leads to a follow-up objection . . .

Ultimately, there can really be only one final cause of the cosmos.
 

blackzeus

23] The Designer Must be Complex and Thus Could Never Have Existed
This is, strictly speaking, a philosophical rather than a scientific argument, and its main thrust is at theists. So, here is a possible theistic answer from one of our comment threads:

“[M]any materialists seem to think (Dawkins included) that a hypothetical divine designer should by definition be complex. That’s not true, or at least it’s not true for most concepts of God which have been entertained for centuries by most thinkers and philosophers. God, in the measure that He is thought as an explanation of complexity, is usually conceived as simple. That concept is inherent in the important notion of transcendence. A transcendent cause is a simple fundamental reality which can explain the phenomenal complexity we observe in reality. So, Darwinists are perfectly free not to believe God exists, but I cannot understand why they have to argue that, if God exists, He must be complex. If God exists, He is simple, He is transcendent, He is not the sum of parts, He is rather the creator of parts, of complexity, of external reality. So, if God exists, and He is the designer of reality, there is a very simple explanation for the designed complexity we observe.” [HT: GPuccio]

Broadening that a bit: we are designers, and while we are plainly complex in one sense, we also experience ourselves as just that – selves, i.e. essentially and indivisibly simple wholes. Thus, designers who are both complex and simple can and do exist. The objection therefore begs the question by assuming, without demonstration, that the complexity in human designers is the condition required to allow the design process.

It also fails to see that we experience ourselves as having indivisible – and thus inescapably simple – individual identities, and that such a property could well be necessary for the design process. So, it begs the question a second time.
 

blackzeus

24] Bad Design Means No Design
This argument assumes an infallible knowledge of the design process.

Some, for example, point to the cruelty in nature, arguing that no self-respecting designer would set things up that way. But that need not be the case. It may well be that the designer chose to create an “optimum design” or a “robust and adaptable design” rather than a “perfect design.” Perhaps some creatures behave exactly the way they do to enhance the ecology in ways that we don’t know about. Perhaps the “apparent” destructive behavior of some animals provides other animals with an advantage in order to maintain balance in nature, or even to change the proportions of the animal population.
Under such circumstances, the “bad design” argument is not an argument against design at all. It is a premature – and, at times, a presumptuous – judgment on the sensibilities of the designer. Coming from theistic evolutionists, who claim to be “devout” Christians, this objection is therefore especially problematic. For, as believers within the Judeo-Christian tradition, they are committed to the doctrine of original sin, through which our first parents disobeyed God and compromised the harmonious relationship between God and man. Accordingly, this break between the creator and the creature affected the relationship between man, animals, and the universe, meaning that the perfect design was rendered imperfect. A spoiled design is not a bad design.

Beyond such theodicy-tinged debates, ID as science makes no claims about an omnipotent or omniscient creator.

From a scientific perspective, a cosmic designer could, in principle, be an imperfect designer and, therefore, create a less than perfect design; indeed, that was precisely the view of many who held to or adapted Plato’s idea of the Demiurge. So, even if one rejects or abandons theism, the “bad design” argument still does not offer a challenge to ID theory as a scientific endeavor.
The real scientific question is this: Is there any evidence for design in nature? Or, if you like, is a design inference the most reasonable conclusion based on the evidence?
 

blackzeus

25] Intelligent Design proponents deny, without having a reason, that randomness can produce an effect, and then go make something up to fill the void
ID proponents do not deny that “randomness can produce an effect.” For instance, consider the law-like regularity that unsupported heavy objects tend to fall. It is reliable; i.e. we have a mechanical necessity at work — gravity. Now, let our falling heavy object be a die. When it falls, it tumbles and comes to rest with any one of six faces uppermost: i.e. high contingency. But, as the gaming houses of Las Vegas know, that contingency can be (a) effectively undirected (random chance), or (b) it can also be intelligently directed (design).

Also, such highly contingent objects can be used to store information, which can be used to carry out functions in a given situation.

For example, we could make up a code and use trays of dice to implement a six-state digital information storage, transmission and processing system. Similarly, the ASCII text for this web page is based on electronic binary digits clustered in 128-state alphanumeric characters. In principle, random chance could produce any such message, but the islands of functional messages will as a rule be very isolated in the sea of non-functional, arbitrary strings of digits, making it very hard to find functional strings by chance.
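As a concrete aside, here is a minimal, purely illustrative Python sketch of the information capacity per symbol for such multi-state storage elements (the helper name is made up for illustration):

```python
import math

def bits_per_symbol(states: int) -> float:
    """Information capacity, in bits, of one storage element with `states` distinguishable states."""
    return math.log2(states)

# A die used as a six-state storage element carries log2(6) ~ 2.58 bits;
# a 128-state ASCII character carries log2(128) = 7 bits.
print(bits_per_symbol(6))    # ~2.585
print(bits_per_symbol(128))  # 7.0
```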

ID thinkers have therefore identified means to test for objects, events or situations that are credibly beyond the reach of chance on the gamut of our observed cosmos. (For simple example, as a rule of thumb, once an entity requires more than about 500 – 1,000 bits of information storage capacity to carry out its core functions, the random walk search resources of the whole observed universe acting for its lifetime will probably not be adequate to get to the functional strings: trying to find a needle in a haystack by chance, on steroids.)
Now, DNA, for instance, is based on four-state strings of bases [A/C/G/T], and a reasonable estimate for the minimum required for the origin of life is 300,000 – 500,000 bases, or 600 kilobits to a million bits. The configuration space that even just the lower end requires has about 9.94 * 10^180,617 possible states. So, even though it is in principle possible for such a molecule to happen by chance, the odds are not practically different from zero.
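Those figures can be checked with a couple of lines of Python (using the 300,000-base lower-end estimate quoted above):

```python
import math

bases = 300_000                        # lower-end estimate quoted above
bits = bases * math.log2(4)            # each 4-state base stores 2 bits -> 600,000 bits
log10_states = bases * math.log10(4)   # configuration space size, as a power of 10

print(int(bits))               # 600000 bits, i.e. 600 kilobits
print(round(log10_states, 3))  # ~180617.997, i.e. roughly 9.9 * 10^180,617 states
```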

But, intelligent designers routinely create information storage and processing systems that use millions or billions of bits of such storage capacity. Thus, intelligence can routinely do that which is in principle logically possible for random chance, but which would easily empirically exhaust the probabilistic resources of the observed universe.

That is why design thinkers hold that complex, specified information (CSI), per massive observation, is an empirically observable, reliable sign of design.

 

blackzeus

26] Dembski’s idea of “complex specified information” is nonsense
First of all, the concept of complex specified information (CSI) was not originated by Dembski. For, as origin of life researchers tried to understand the molecular structures of life in the 1970s, Orgel summed up their findings thus:

Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. [ L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added.]

In short, the concept of complex specified information helped these investigators understand the difference between (a) the highly informational, highly contingent functional macromolecules of life and (b) crystals formed through forces of mechanical necessity, or (c) random polymer strings. In so doing, they identified a very familiar concept — at least to those of us with hardware or software engineering design and development or troubleshooting experience and knowledge.

Namely, complex, specified information, shown in the mutually adapted organization, interfacing and integration of components in systems that depend on properly interacting parts to fulfill objectively observable functions. For that matter, this is exactly the same concept that we see in textual information as expressed in words, sentences and paragraphs in a real-world language.

Furthermore, on massive experience, such CSI reliably points to intelligent design when we see it in cases where we independently know the origin story.

What Dembski did with the CSI concept in the following two decades was to:

(i) recognize CSI’s significance as a reliable, empirically observable sign of intelligence,

(ii) point out the general applicability of the concept, and

(iii) provide an explicitly formal model, based on probability and information theory, for quantifying CSI.
 

blackzeus

27] The Information in Complex Specified Information (CSI) Cannot Be Quantified
That’s simply not true. Different approaches have been suggested for that, and different definitions of what can be measured are possible.

As a first step, it is possible to measure the number of bits used to store any functionally specific information, and we could term such bits “functionally specific bits.”

Next, the complexity of a functionally specified unit of information (like a functional protein) could be measured directly or indirectly, based on the reasonable probability of finding such a sequence through a random-walk-based search or its functional equivalent. This approach rests on the observation that functionality of information is rather specific to a given context, so if the islands of function are sufficiently sparse in the wider search space of all possible sequences, then beyond a certain scope of search it becomes implausible that such a search, on a planet-wide scale or even on a scale comparable to our observed cosmos, will find it. But we know that, routinely, intelligent actors create such functionally specific complex information; e.g. this paragraph. (And, we may contrast (i) a “typical” random alphanumeric character string showing random sequence complexity: kbnvusgwpsvbcvfel;’.. jiw[w;xb xqg[l;am . . . and/or (ii) a structured string showing orderly sequence complexity: atatatatatatatatatatatatatat . . . [The contrast also shows that a designed, complex specified object may also incorporate random and simply ordered components or aspects.])
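To make the sparse-islands point concrete, here is a hedged toy calculation in Python: it compares a budget of random trials against the fraction of a sequence space occupied by functional sequences. The specific numbers (sequence length, number of functional sequences, trial budget) are placeholders chosen for illustration, not measured values:

```python
import math

def log10_expected_hits(alphabet: int, length: int, functional_seqs: float, trials: float) -> float:
    """Log10 of the expected number of functional sequences found by `trials`
    independent random draws from the space of all length-`length` sequences."""
    log10_space = length * math.log10(alphabet)               # size of the whole sequence space
    log10_p_hit = math.log10(functional_seqs) - log10_space   # chance one draw lands on an island
    return math.log10(trials) + log10_p_hit

# Toy numbers: a 150-residue protein over a 20-letter alphabet, a generous 10^30
# functional sequences, and 10^120 trials (the Lloyd-style bound quoted below).
print(log10_expected_hits(20, 150, 1e30, 1e120))  # ~ -45, i.e. about 10^-45 expected hits
```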

Another empirical approach to measuring functional information in proteins has been suggested by Durston, Chiu, Abel and Trevors in their paper “Measuring the functional sequence complexity of proteins”, and is based on an application of Shannon’s H (that is “average” or “expected” information communicated per symbol: H(Xf(t)) = -∑P(Xf(t)) logP(Xf(t)) ) to known protein sequences in different species.
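For orientation, the Shannon H part of that calculation can be sketched as follows; this is a simplified illustration of a per-site entropy over an alignment column, not the authors' published code, and the example column is made up:

```python
import math
from collections import Counter

def shannon_H(symbols) -> float:
    """Shannon H in bits: -sum p_i * log2(p_i) over the observed symbol frequencies."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# One aligned amino-acid column across several functional sequences (made-up data):
column = list("LLLLIVLLML")
print(round(shannon_H(column), 3))  # lower H at a site indicates tighter functional constraint
```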

A more general approach to the definition and quantification of CSI can be found in a 2005 paper by Dembski: “Specification: The Pattern That Signifies Intelligence”.

For instance, on pp. 17 – 24, he argues:
define ϕS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 logarithm of the conditional probability P(T|H) multiplied by the number of similar cases ϕS(t) and also by the maximum number of binary search-events in our observed universe 10^120]


χ = –log2[10^120 · ϕS(T) · P(T|H)].

To illustrate, consider a hand of 13 cards that is all spades, which is unique. A 52-card deck yields about 635 * 10^9 possible 13-card hands, giving odds of about 1 in 635 billion as P(T|H). Also, there are four similar all-of-one-suit hands, so ϕS(T) = 4. Calculation yields χ = –361, i.e. < 1, so that such a hand is not improbable enough that the – rather conservative – χ metric would conclude “design beyond reasonable doubt.” (If you see such a hand in the narrower scope of a card game, though, you would be very reasonable to suspect cheating.)
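That card-hand figure can be checked directly from the χ formula quoted above (a quick Python verification; math.comb(52, 13) gives the 635 * 10^9 hand count):

```python
import math

hands = math.comb(52, 13)     # 635,013,559,600 possible 13-card hands
P_T_given_H = 1 / hands       # probability of one specific hand, e.g. all spades
phi_S = 4                     # four equally simple "all one suit" descriptions
resources = 10**120           # Lloyd-style bound on bit operations in the observed universe

chi = -math.log2(resources * phi_S * P_T_given_H)
print(round(chi, 1))  # ~ -361.4: far below 1, so no design inference at cosmic scope
```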

Debates over Dembski’s models and metrics notwithstanding, the basic point of a specification is that it stipulates a relatively small target zone in so large a configuration space that the reasonably available search resources — on the assumption of a chance-based information-generating process — will have extremely low odds of hitting the target. So low, that random information generation becomes an inferior and empirically unreasonable explanation relative to the well-known, empirically observed source of CSI: design.
 

blackzeus

28] What about FSCI [Functionally Specific, Complex Information] ? Isn’t it just a “pet idea” of some dubious commenters at UD?
Not at all. FSCI – Functionally Specific, Complex Information or Function-Specifying Complex Information (occasionally FCSI: Functionally Complex, Specified Information) – is a descriptive summary of the particular subset of CSI identified by several prominent origin of life [OOL] researchers in the 1970s – 80s. At that time, the leading researchers on OOL sought to understand the differences between (a) the highly informational, highly contingent functional macromolecules of life and (b) crystals formed through forces of mechanical necessity, or (c) random polymer strings. In short, FSCI labels a categorization that emerged as pre-ID-movement OOL researchers struggled to understand the difference between crystals, random polymers and informational macromolecules.

Indeed, by 1984, Thaxton, Bradley and Olsen, writing in the technical-level book that launched modern design theory, The Mystery of Life’s Origin, could summarize in Chapter 8 from two key origin of life [OOL] researchers as follows:

Yockey [7] and Wickens [5] develop the same distinction [as Orgel], explaining that “order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future. [TMLO, (Dallas, TX: Lewis and Stanley reprint), 1992, erratum insert, p. 130. Emphases added.]
The source of the abbreviation FSCI should thus be obvious – and it is one thing to airily dismiss blog commenters; it is another thing entirely to have to squarely face the result of the work of men like Orgel, Yockey and Wickens as they pursued serious studies on the origin of life. But also, while the cluster of concepts came up in origin of life studies, these same ideas are very familiar in engineering: engineering designs are all about stipulating functionally specific, complex information. Indeed, FSCI is a hallmark of engineered or designed systems.


So, FSCI is actually a functionally specified subset of CSI, i.e. the relevant specification is connected to the presence of a contingent function due to interacting parts that work together in a specified context per requirements of a system, interface, object or process. For practical purposes, once an aspect of a system, process or object of interest has at least 500 – 1,000 bits (or the equivalent) of information storage capacity, and uses that capacity to specify a function that can be disrupted by moderate perturbations, then it manifests FSCI, thus CSI. This also leads to a simple metric for FSCI, the functionally specified bit; as with those that are used to display this text on your PC screen. (For instance, where such a screen has 800 x 600 pixels at 24 bits per pixel, that requires 11.52 million functionally specified bits. This is well above the 500 – 1,000 bit threshold.)
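The parenthetical screen example is just a capacity count; for instance (illustrative only):

```python
width, height, bits_per_pixel = 800, 600, 24
functionally_specified_bits = width * height * bits_per_pixel
print(functionally_specified_bits)           # 11,520,000 bits = 11.52 million
print(functionally_specified_bits > 1_000)   # True: far beyond the 500 - 1,000 bit threshold
```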

On massive evidence, such cases are reliably the product of intelligent design, once we independently know the causal story. So, we are entitled to (provisionally of course; as per usual with scientific work) induce that FSCI is a reliable, empirically observable sign of design.
 

blackzeus

30] William Dembski “dispensed with” the Explanatory Filter (EF) and thus Intelligent Design cannot work
This quote by Dembski is probably what you are referring to:

I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection.

In a nutshell: Bill made a quick off-the-cuff remark using an unfortunately ambiguous phrase that was immediately latched onto and grossly distorted by Darwinists, who claimed that the “EF does not work” and that “it is a zombie still being pushed by ID proponents despite Bill disavowing it years ago.” But in fact, as the context makes clear – i.e. we are dealing with a real case of “quote-mining” [cf. here vs. here] – the CSI concept is in part based on the properly understood logic of the EF. Having gone through that logic, it is simply easier and “clearer” to then use “straight CSI” as an empirically well-supported, reliable sign of design.

In greater detail: The above is the point of Dembski’s clarifying remarks that: “. . . what gets you to the design node in the EF is SC (specified complexity). So working with the EF or SC end up being interchangeable.” [For illustrative instance, contextually responsive ASCII text in English of at least 143 characters is a “reasonably good example” of CSI. How many cases of such text can you cite that were wholly produced by chance and/or necessity without design (which includes the design of Genetic Algorithms and their search targets and/or oracles that broadcast “warmer/cooler”)?]
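The 143-character figure presumably reflects the 500 – 1,000 bit threshold discussed earlier: 143 seven-bit ASCII characters is just over 1,000 bits, and the corresponding configuration space is enormous. A quick illustrative check in Python:

```python
import math

chars = 143
bits = chars * 7                        # 7-bit ASCII: 1,001 bits, just past the 1,000-bit mark
log10_space = chars * math.log10(128)   # number of possible 143-character ASCII strings

print(bits)                   # 1001
print(round(log10_space, 1))  # ~301.3, i.e. about 10^301 possible strings
```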

Dembski responded to such latching-on as follows, first acknowledging that he had spoken “off-hand” and then clarifying his position in light of the unfortunate ambiguity of the phrasal verb dispensed with:

In an off-hand comment in a thread on this blog I remarked that I was dispensing with the Explanatory Filter in favor of just going with straight-up specified complexity. On further reflection, I think the Explanatory Filter ranks among the most brilliant inventions of all time (right up there with sliced bread). I’m herewith reinstating it — it will appear, without reservation or hesitation, in all my future work on design detection.

[….]

I came up with the EF on observing example after example in which people were trying to sift among necessity, chance, and design to come up with the right explanation. The EF is what philosophers of science call a “rational reconstruction” — it takes pre-theoretic ordinary reasoning and attempts to give it logical precision. But what gets you to the design node in the EF is SC (specified complexity). So working with the EF or SC end up being interchangeable. In THE DESIGN OF LIFE (published 2007), I simply go with SC. In UNDERSTANDING INTELLIGENT DESIGN (published 2008), I go back to the EF. I was thinking of just sticking with SC in the future, but with critics crowing about the demise of the EF, I’ll make sure it stays in circulation.

Underlying issue: Now, too, the “rational reconstruction” basis for the EF as it is presented (especially in flowcharts circa 1998) implies that there are facets in the EF that are contextual, intuitive and/or implicit. For instance, even so simple a case as a tumbling die that then settles has necessity (gravity), chance (rolling and tumbling) and design (tossing a die to play a game, and/or the die may be loaded) as possible inputs. So, in applying the EF, we must first isolate relevant aspects of the situation, object or system under study, and apply the EF to each key aspect in turn. Then, we can draw up an overall picture that will show the roles played by chance, necessity and agency.

To do that, we may summarize the “in-practice EF” a bit more precisely as:

1] Observe an object, system, event or situation, identifying key aspects.

2] For each such aspect, identify if there is high/low contingency. (If low, seek to identify and characterize the relevant law(s) at work.)

3] For high contingency, identify if there is complexity + specification. (If there is no recognizable independent specification and/or the aspect is insufficiently complex relative to the universal probability bound, chance cannot be ruled out as the dominant factor; and it is the default explanation for high contingency. [Also, one may then try to characterize the relevant probability distribution.])

4] Where CSI is present, design is inferred as the best current explanation for the relevant aspect; as there is abundant empirical support for that inference. (One may then try to infer the possible purposes, identify candidate designers, and may even reverse-engineer the design (e.g. using TRIZ), etc. [This is one reason why inferring design does not “stop” either scientific investigation or creative invention. Indeed, given their motto “thinking God's thoughts after him,” the founders of modern science were trying to reverse-engineer what they understood to be God's creation.])

5] On completing the exercise for the set of key aspects, compose an overall explanatory narrative for the object, event, system or situation that incorporates aspects dominated by law-like necessity, chance and design. (Such may include recommendations for onward investigations and/or applications.)
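A minimal Python sketch of that per-aspect decision flow, purely as an illustration of steps 1 – 5 above (the field names, threshold and classifications are assumptions chosen for the sketch, not an official implementation):

```python
from dataclasses import dataclass

THRESHOLD_BITS = 500  # lower end of the 500 - 1,000 bit range used in this thread

@dataclass
class Aspect:
    name: str
    high_contingency: bool          # step 2: does the outcome vary freely under similar conditions?
    information_bits: float         # step 3: information storage capacity used by this aspect
    independently_specified: bool   # step 3: is there an independent (e.g. functional) specification?

def classify(aspect: Aspect) -> str:
    """Assign the best current explanation for one aspect: necessity, chance, or design."""
    if not aspect.high_contingency:
        return "necessity (law-like regularity)"
    if aspect.independently_specified and aspect.information_bits >= THRESHOLD_BITS:
        return "design (complex + specified)"
    return "chance (default for high contingency)"

# Toy example: two aspects of a tossed die.
aspects = [
    Aspect("falls when released", high_contingency=False, information_bits=0.0, independently_specified=False),
    Aspect("which face is uppermost", high_contingency=True, information_bits=2.58, independently_specified=False),
]
for a in aspects:
    print(f"{a.name}: {classify(a)}")
```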
 

blackzeus

31] Intelligent Design Tries To Claim That Everything is Designed Where We Obviously See Necessity and Chance
Intelligent Design has never claimed anything like that. Design is just a supplementary causal mechanism, empirically observed in human behavior, which can explain some observed aspects of things that cannot be explained in other ways. Let’s quote Behe on that:

Intelligent design is a good explanation for a number of biochemical systems, but I should insert a word of caution. Intelligent design theory has to be seen in context: it does not try to explain everything. We live in a complex world where lots of different things can happen. When deciding how various rocks came to be shaped the way they are a geologist might consider a whole range of factors: rain, wind, the movement of glaciers, the activity of moss and lichens, volcanic action, nuclear explosions, asteroid impact, or the hand of a sculptor. The shape of one rock might have been determined primarily by one mechanism, the shape of another rock by another mechanism. Similarly, evolutionary biologists have recognized that a number of factors might have affected the development of life: common descent, natural selection, migration, population size, founder effects (effects that may be due to the limited number of organisms that begin a new species), genetic drift (spread of “neutral,” nonselective mutations), gene flow (the incorporation of genes into a population from a separate population), linkage (occurrence of two genes on the same chromosome), and much more. The fact that some biochemical systems were designed by an intelligent agent does not mean that any of the other factors are not operative, common, or important.


And:

I think a lot of folks get confused because they think that all events have to be assigned en masse to either the category of chance or to that of design. I disagree. We live in a universe containing both real chance and real design. Chance events do happen (and can be useful historical markers of common descent), but they don’t explain the background elegance and functional complexity of nature. That required design.

So, it is absolutely not true that ID claims that “everything is designed”. Indeed, a main purpose of ID is exactly to find ways to reasonably distinguish between what is designed and what is not.
 

blackzeus

32] What types of life are Irreducibly Complex? Or which life is not Irreducibly Complex?
Irreducible Complexity is a property of some machines, not of life itself. Many biological machines which are essential components of all life are Irreducibly Complex (IC). Not all components of living organisms are IC, nor do they necessarily exhibit Complex Specified Information (CSI). For, following Behe, such an entity will be irreducibly complex if and only if its core functionality relies on a multi-part set of mutually co-adapted, interacting components.

The fact that a biological machine is IC (e.g. as shown through genetic knockout studies of the components of, e.g., the bacterial flagellum) implies that it cannot reasonably be the product of direct Darwinian pathways operating through selection of the observed function. This is because the function only emerges when the whole machine is already there. The real question is whether unguided Darwinian processes, in whatever form, can produce IC machines through indirect pathways; for instance by co-option (producing sub-components which can be selected for a different function, and then “co-opted” for the final function). But any such indirect pathway should be explicitly modeled, and shown to be in the range of what unguided evolution can reasonably do.

A direct Darwinian pathway implies that the steps are selected for the improvement of the same function we find in the final machine. But, IC makes a direct Darwinian pathway impossible. So, only two possibilities are left: either (i) sudden appearance of the complete machine (practically impossible for statistical considerations), or (ii) step by step selection for different functions and co-optation to make a novel function, with this final function completely invisible to natural selection up to the final step.

We should also bear in mind that most – or at least a great many — biological machines in the cell, and most – or at least a great many — macroscopic machines in multicellular beings, are probably IC. (This is a point that Darwinists tend to bypass.)

Darwinists may believe in indirect Darwinian pathways, because it’s the only possible belief which is left for them, but it’s easy to see that it really means believing in repeated near-impossibilities. There is no reason in the world, either logical or statistical, why many complex functions should emerge from the sums of simpler, completely different functions. And even granted that, by incredible luck, that could happen once, how can one believe that it happened millions of times, for the millions (yes, I mean it!) of different IC machines we observe in living beings? The simple fact that Darwinists have to adopt arguments like co-option and indirect pathways to salvage their beliefs is a clear demonstration of how desperate they are.
 

NkrumahWasRight Is Wrong

I'm just completing the basic arguments against ID breh, people try to simplify ID as believing in an Abrahamic figure, when to the contrary, ID is so much more complicated and well thought out

I feel u on that..I was just sayin let people comment and if/when they debate then quote and source the article. Thread might lose attention if people come in here and just see endless blocks of text off the bat. I think it was a great OP and good change of pace on religious threads.
 

blackzeus

For those who want to give biology lessons:

33] In the Flagellum Behe Ignores that this Organization of Proteins has Verifiable Functions when Particular Proteins are Omitted, i.e. in its simplest form, a protein pump
Irreducible complexity means that the function of a complex machine is not maintained if we take away any of its core parts; e.g. as Minnich did for the bacterial flagellum. In other words, it means that there is no redundancy in the core of the machine, and that none of the parts or sub-assemblies can retain the function of the whole.

So, despite the TTSS (Type Three Secretory System) objection and the like, the flagellum, therefore, still credibly is irreducibly complex.

Behe’s main argument is that IC machines like the flagellum cannot reasonably be the product of direct Darwinian pathways, because the function only emerges when the machine is wholly assembled, and therefore cannot be selected for before that point. That is supported by the observation that there are no technically detailed descriptions of such pathways in the scientific literature; which remains the case now, over a decade since his observation was first published in 1996 in Darwin’s Black Box. So, Darwinists have tried to devise indirect Darwinian pathways for IC machines, using the notion of co-option, or exaptation, which more or less means: even if the parts or sub-assemblies of the machine cannot express the final function, they can have different functions, be selected for them, and then be co-opted for the new function.

The TTSS is suggested as an example of such a possible co-opted mechanism. The Darwinist argument is that there is strong homology between the proteins of the TTSS and a subset of the proteins of the flagellum which are part of a substructure in the basal body of the flagellum itself. Therefore, the flagellum could have reutilized an existing system.

The hypothesis has some empirical basis in the homology between the two systems: but that should not surprise us, because both the TTSS and the “homologue” subset in the flagellum accomplish a similar function: they pump proteins through a membrane. So, it is somewhat like saying that an airplane and a cart are similar because both have wheels. It is true, but an airplane is not a cart. For, the flagellum is not a TTSS; it is much more. And the sub-machine which pumps proteins in the basal body of the flagellum is similar to, but not the same as the TTSS.

Moreover, it is relevant to observe that the TTSS is used by prokaryote – non-nucleus-based – bacteria to prey upon much more complex eukaryote – nucleus-based – cells, which appeared after the prokaryotes with flagella (i.e. the bacterial flagellum). In short, there is an obvious “which comes first?” counter-challenge to the objection: it is at least as credible to argue that the TTSS is a devolution as that it is a candidate prior functional sub-assembly.

To sum up:

  1. A lot of the proteins in the flagellum have no explanation on the basis of homologies to existing bacterial machines, or of partial selectable function.

  2. Even if the functions of the TTSS and of the sub-machine in the flagellum are similar, the two machines are in fact different, and the proteins in the two machines are not the same. Homology does not mean identity.

  3. Most importantly, TTSS arguments notwithstanding: the overall function of the flagellum cannot be accomplished by any simpler subset. That means that the flagellum is irreducibly complex.

  4. Explaining the evolution of the flagellum by indirect pathways would imply explaining all its parts on the basis of partial selectable functions, and explaining also their production, regulation, assemblage, compatibility, and many other complex engineering adaptations. In other words, even if you have wheels and seats, engines and windows, you are still far away from having an airplane.
Finally, it is still very controversial whether the flagellum appeared after the TTSS, or rather the opposite; in which case the TTSS could easily be explained as a derivation from a subset of the flagellum, retaining the pump function with loss of the higher-level functions. And anyway, the TTSS itself is irreducibly complex.

 