Nonscience

From an article about the crisis in science: “Perhaps those who engage in … ‘shoddy science’ or even ‘sleazy science’ don’t even know that it is sub-standard.” Unfortunately, that seems to be the case. At first, it didn’t really occur to me that what I was seeing wasn’t just the product of dishonesty. (I still think it must be a combination of stupidity and dishonesty.) The selection process has selected against critical thinking and genuine curiosity, and in favour of a pseudoscientific culture filled with pseudo-questions and junk data that flows out unchecked. This culture is apparently widespread, even dominating clinical research, which is appalling.

They just said on SKAI (about a politician – I think about Tsipras), “Stupid, or a crook?” The perennial question.

About the science crisis

A lot is being written about the crisis in science. Some comments here – http://www.firstthings.com/article/2016/05/scientific-regress – echo what I’ve been observing in vision science.

If peer review is good at anything, it appears to be keeping unpopular ideas from being published. [as I learned after a dozen reviewed letter attempts].

As the authors put it, “some non-reproducible preclinical [cancer] papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis.” What they do not mention is that once an entire field has been created—with careers, funding, appointments, and prestige all premised upon an experimental result which was utterly false due either to fraud or to plain bad luck—pointing this fact out is not likely to be very popular. Peer review switches from merely useless to actively harmful. It may be ineffective at keeping papers with analytic or methodological flaws from being published, but it can be deadly effective at suppressing criticism of a dominant research paradigm.

Here’s what I had written in a comment on Solomon, May & Tyler (2016):

“I think it’s fairly clear that at least some research in the field of visual perception is performed in the context of self-referential dogmas that can inspire experiments (using a limited, carefully selected set of stimuli) that produce results analyzed and described in terms of the preferred framework, without ever challenging that framework, and even though the latter may well be logically untenable and empirically falsifiable (or, indeed, falsified). Obviously, this isn’t a recipe for progress.” 

In many fields, it’s common for an established and respected researcher to serve as “senior author” on a bright young star’s first few publications, lending his prestige and credibility to the result, and signaling to reviewers that he stands behind it.

And some quotes from D. Funder, reproduced here: https://fabiusmaximus.com/2016/04/19/replication-crisis-in-science-95394/

Yet it [NHST, or other flawed ideas] persists, even though – and, ironically, perhaps because – it has never really been explicitly defended! … Eventually the critiques die down. Nothing changes… The defenders of the status quo rarely actively defend anything. [No need for “verbal theorising”!] … Instead they will just continue to publish each others’ work in all the “best” places, hire each other into excellent jobs and, of course, give each other awards. This is what has happened every time before.

The defenders of the status quo don’t have to argue if their goal is power and they already have it. If their goal were good science, it would be another story.

 

The image of the thing is not the thing


In addition to the general mindlessness of contemporary vision science, there’s the persistent conceptual failure built into the conversation: the failure to make the key distinction between the percept, the stimulus, and the real world. The real world is, in effect, treated as the stimulus, to be detected and analysed, thus skipping over the real problems of perception, i.e. the problems that the Gestaltists were the first to appreciate and address.

It has to do, in other words, with treating an image – or, more accurately, a projection onto the retina – as a thing, and treating the problem as one of detecting that thing and its features, rather than of interpreting a planar set of isolated points differing in luminance; with not appreciating that there are objects in perception (and in the world), but not in the stimulus.

Certain authors talk about “feature detection,” as though features were prior to, rather than products of, the completed perceptual process. (Note to self: don’t forget to look at Treisman.) They then “explore” how good people are at detecting that thing, or features thereof, and how much their descriptions match or diverge from the supposed reality when various factors pertaining to that supposed reality are varied. Similarly, there is reference to “signals” and signal detection theory. In this case, the attempt to frame the problems of vision in terms of SDT has led to the comical assumption (because they need a noise term) that there is internal “noise” that mediates the percept and that this noise can be “added” externally by fuzzing up the image. The supposed signal is the form the investigators have in mind (Pelli’s letters) and the supposed noise is the set of changes they make to render it harder to see. They don’t realise that the “signal” is the whole shebang, and that all of it determines the percept, based on rules of organisation. They measure accuracy, and then make ad hoc explanations of the performance, based on the particular stimulus and what they preordained about its structure.
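
To make the set-up concrete, here is a minimal sketch of the kind of noise-masking experiment described above. It is my own caricature, not anyone’s published code: the toy “letter,” the numbers, and the template-matching score are invented for illustration (not Pelli’s actual stimuli); only the logic – a preordained “signal” plus externally “added” pixel noise, scored against the experimenter’s template – is taken from the description above.

# A caricature of the noise-masking paradigm discussed above (all values invented):
# the "signal" is whatever form the experimenter has in mind; the "noise" is
# whatever is added to make it harder to see.
import numpy as np

rng = np.random.default_rng(0)

def make_signal(size=32):
    # A crude letter-like form (a bright "T" on a grey field), standing in for the
    # experimenter's preordained template.
    img = np.full((size, size), 0.5)
    img[4:8, 4:-4] = 1.0                           # horizontal bar
    img[8:-4, size // 2 - 2:size // 2 + 2] = 1.0   # vertical stem
    return img

def add_external_noise(img, sigma):
    # "Add" noise externally by fuzzing up the image, as in the studies discussed.
    return img + rng.normal(0.0, sigma, img.shape)

def template_score(stimulus, template):
    # Score the trial the way the framework assumes: cross-correlate the stimulus
    # with the template the experimenter had in mind.
    return float(np.sum((stimulus - stimulus.mean()) * (template - template.mean())))

signal = make_signal()
for sigma in (0.1, 0.5, 1.0):
    print(sigma, round(template_score(add_external_noise(signal, sigma), signal), 1))

Nothing in this little loop ever asks how the luminance pattern comes to be organised into a figure in the first place – which is the point being made above.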

So what are treated as causal variables – shapes, features, illumination, etc. – are actually effects. The confusion is multiplied when we’re dealing with images, and even more so with simulations, of both objects and illumination. These are referred to as though they were the actual things, which seems uncomplicated as long as they look like the things the investigators have in mind, but the practice continues even as the actual stimulus is altered in ways that may change the shapes in the percept. Thus, Ivanov et al. refer to an image that is foreshortened (whereas what is foreshortened is the shape of the underlying object) and say that this foreshortening “cue” is used to infer slant. In the case of a shape whose foreshortened projection was seen as fronto-parallel, for example, they would presumably say that the “foreshortening cue” was present but not used, and thus that the shape was inaccurately perceived. The notion of measuring the “accuracy” of the percept is misconceived. It’s related to the notion of “constancy”, and of “measuring constancy.”
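
A small worked example (my own, not Ivanov et al.’s stimuli; the slant value of 60° is arbitrary) of why “foreshortening” belongs to the interpreted object rather than to the image: under orthographic projection, a circular disc slanted by 60° and a fronto-parallel ellipse with aspect ratio cos 60° = 0.5 produce exactly the same image, so treating foreshortening as a “cue” already presupposes the shape that is supposedly being inferred from it.

# Same image, two different objects (my own toy example): a slanted circle and a
# fronto-parallel ellipse project to identical point sets under orthographic projection.
import numpy as np

theta = np.radians(60)                               # slant of the circular disc
t = np.linspace(0, 2 * np.pi, 8, endpoint=False)     # sample points around the contour

slanted_circle = np.column_stack([np.cos(t), np.sin(t) * np.cos(theta)])
frontal_ellipse = np.column_stack([np.cos(t), 0.5 * np.sin(t)])   # aspect ratio cos(60°) = 0.5

print(np.allclose(slanted_circle, frontal_ellipse))  # True: the projections coincide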

“How does the visual system achieve constancy?” Again, the notion is: here we have the thing, and how do we manage to DETECT it under various conditions, e.g. changing viewpoint, illumination, etc.? What “cues” do we use to detect the object? How do we detect and combine features of the object to reproduce the object? The underlying object is taken for granted (the “experience error”?), is taken for the “cause”, and the true causes are completely overlooked. The physical world around us is the cause of the projection but is not present in the projection. There are no objects and no features of objects. A complex series of integrated assumptions is needed to segregate, unify and shape the parts of the visual field. Except marginally, in the sense of objective set, the notion of constancy doesn’t help us understand this, but rather obscures the problem. Constancy and lack of constancy are on a par; both depend on how the field is organised and perceptually shaped at the moment.

So-called shape-from-shading – the concept and the research based on it – is an example of this confusion. How (it is asked) do we achieve constancy of shape when shading is so inconstant? In other words, why do we see the same shapes when the “cues” change so much? There must be some “invariants.” What they don’t see, or downplay, is that you can create underlying shapes (the 3D landscape on which you then simulate illumination) that will not be constant under changing illumination. The simplest example, of course, is the change we see in bumps/indentations when we change the illumination from above to below. The explanation that this is just because of a top-lighting assumption doesn’t change the fact that we have a “breakdown of constancy.” Nor does it explain why faces, for example, become unintelligible with bottom lighting. Constancy and lack of constancy depend on the same principles; trying to explain constancy on the basis of stimuli that produce constancy can only result in inadequate, ad hoc explanations. (In general, choosing stimuli that can only confirm otherwise easily falsifiable claims, and using the effects of those stimuli to elaborate on the false claims, is the name of the game right now. So, for example, we can say that the visual system “uses contextual cues to achieve constancy.” We then speculate on what cues were used in a particular set-up. We mildly refer to some contradictory evidence in the literature. We speculate that if we can’t find cues there must be some higher-level invariants. We conclude that more research is needed.)
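
The bump/indentation case can be made concrete with a toy rendering – my own illustration, not taken from any of these papers; the height field, the light directions and the Lambertian shading model are all assumptions. A bump lit from above and an indentation lit from below project exactly the same luminance pattern, so the pattern itself does not settle which relief is seen.

# Toy Lambertian rendering of a height field: a convex bump lit from the top of the
# image and its inverted (concave) version lit from the bottom give identical images.
import numpy as np

def render(height, light):
    # Lambertian shading under a distant light source (light is a unit vector).
    gy, gx = np.gradient(height)
    normals = np.dstack([-gx, -gy, np.ones_like(height)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return np.clip(normals @ light, 0.0, None)

y, x = np.mgrid[-1:1:64j, -1:1:64j]
bump = np.exp(-(x**2 + y**2) / 0.1)        # convex height field
dent = -bump                               # the same field, inverted

light_above = np.array([0.0,  0.6, 0.8])   # light from the top of the image
light_below = np.array([0.0, -0.6, 0.8])   # light from the bottom of the image

print(np.allclose(render(bump, light_above), render(dent, light_below)))  # True

The ambiguity is in the stimulus itself; the “top-lighting assumption” names the resolution we usually experience, it does not remove the ambiguity.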

Egan and Todd (effects of smooth occlusions…) imagine a surface with various bulges and indentations, viewed from a certain vantage point. They make a map of the relative depths of points on this surface, and they want to see whether observers map the surface in a way that matches their schema. They also imagine that the surface is illuminated in various ways, because they are interested in the effects of illumination direction on how the surface is perceived. They’ll then say things like: perceived depth was sheared in the direction of illumination, or “occluding contours” had a negligible effect on it.

The problem with describing perceived depth as a function of the structure of the illumination is that saying that surface “x” is illuminated in manner “y” says absolutely nothing about the luminance structure of the projection, which constitutes the relevant fact – the one that actually engages the visual system. The term “shape-from-shading” has a similar problem; it treats “shading” as an independent fact and feature of the stimulus, when in fact its apparent presence is a product of visual processing. Describing a stimulus as a variously-shaded homogeneous surface, and the observer as judging depth based on the shading, likewise says nothing about the luminance structure of the projection, i.e. it says nothing about the information the visual system has to work with. The luminance pattern corresponding to the description “an image (projection) of an illuminated real-world object” is completely unconstrained, even if we specify the objective shape of the visible sides of the real-world object. Completely unconstrained. Referring to the study of “lightness constancy” is at the heart of the problem.
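
A back-of-the-envelope illustration of “completely unconstrained” – my own invented numbers, using a crude luminance ≈ illumination × reflectance × geometry approximation: hold the object’s shape fixed and the projected luminance can still be almost anything, because the illumination and reflectance terms remain free.

# Invented numbers for a single surface patch of fixed shape: the same geometry term
# yields the same luminance under very different scenes, and very different luminances
# under scenes sharing the same paint.
geometry = 0.7                       # fixed shape/viewpoint factor for the patch
scenes = [
    ("bright light, light paint", 100.0, 0.80),
    ("10x the light, dark paint", 1000.0, 0.08),
    ("dim light, light paint",      20.0, 0.80),
]
for label, illumination, reflectance in scenes:
    print(label, round(illumination * reflectance * geometry, 1))
# First two scenes: identical projected luminance (56.0) from the same shape.
# Third scene: same shape and same paint as the first, very different luminance (11.2).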


Air-conditioned nightmare

People say they go to Florida for the climate, but maybe they should say they go for the climate control! Even when the weather is neither too hot nor too humid, even when it’s perfect, they’re in their cars, in their houses, with the AC on constantly. The sound of outdoor central (?) AC units is constant. (I guess that may be why people volunteer for Mars missions – they’d spend their lives in a climate-controlled cubicle – perfect!)

Roads everywhere, neighbourhoods included, are double-wide, amplifying the flatness of the landscape. It’s Levittown with broader streets, palms instead of maples. I guess because of the part-time nature of many of the residents, the even more car-oriented culture, and the huge distances to get anywhere, the sense of neighbourhood is even more attenuated than elsewhere. Except in the very touristy areas, there are almost no cafes and no outdoor seating.

Nice

HUDSON: Since the 2008 crash the government has guaranteed almost all new mortgage loans. Up to 43% of the borrower’s income, that was guaranteed. Student loans, all guaranteed. But basically the banks made money abroad. If you could borrow at one-tenth of a percent from the Federal Reserve, you could buy Brazilian loans, bonds paying 9% or more. You could gamble on writing default swaps in Greece.

And when Greece had real problems, the fact that the German and French banks had made too many loans to it, the IMF was going to write down the Greek debt. But then Geithner got on the phone with Europe, and Obama went to the G20 meetings and said, “Look, you can’t write off the Greek debt, because the American banks have essentially turned into horse race betters. We have casino capitalism. They have bet and promised to guarantee, the Greek bonds. If the Greek bonds are written down, the American banks will go under. And if we go under, we promise we’re going to bring you down too. We’re going to bring down the European banks. Do you really want that to happen?”

So the gambles made by Wall Street ended up almost driving Greece out of the European Union. Wall Street was willing to tear Europe apart politically just for the Wall Street investment banks – basically four banks – to make gains by insuring the Greek debt, by treating the financial market like a horse race.

vsbs

The Schmidtmann et al article, via its refs and the “similar articles” thingy, has opened up another can of worms. The amount of crap that’s clogging the arteries of this field is as extraordinary as the lack of integrity. The problems I flagged on that paper and on the Solomon et al, which seemed so blatant, I now see are part of a long-standing tradition boasting many pseudotechniques and pseudoconcepts, used as though they were real currency. (Les faux-monnayeurs.) So they are baffled that views so widely accepted could be questioned; such an eventuality has never entered their minds. (Even selective reporting of data is apparently an unimpeachable non-offence – I got a “mind your business” reaction from CWT.)

This latest vein is a rich and nauseating one; I’ve commented on a few more papers from it. It is the psychophysics tradition, unreformed, exaggerated, spun out of control – impossible to overstate this. Casual references to “linking hypotheses” got me googling, and I hit upon an article by one D. Teller, who decades ago was cautioning against the prevailing empirical lassitude – i.e. an insider confirms many of the concerns noticed by an outsider (me).

Reference to the popularity of an idea or technique in lieu of argument, references that in no way confirm the confident assertions they pretend to support – these are the methods of an immense idea-laundering scheme in which laughable claims achieve the appearance of broad-based support, licensing further ridiculous elaborations. (Just popped over to the VSS presentations list – my god, everything I looked at is junk. It’s a fraud factory. Gil has only poster sessions, and no mention of AT in the abstracts. BR is skipping this meeting, oddly, as SA is going.) And a pertinent quote from Popper (footnote to Ch. 11, O.S.), saying that we have to fight systems which “tend to bewitch and to confuse us”: “We have to take the trouble to analyse the systems in some detail; we must show that we understand what the author means, but that what he means is not worth the effort to understand it.” You have to do it, but it’s not worth the trouble – this is so true, such a waste.

Toying with the idea of a pp topic just collecting quotes, with refs, showing “popularity” being used as a criterion of empirical weight.

I keep going after them, one after the other. They’re not disconnected, though; the bigger picture comes into focus, everything connects. The absence of shape, the absence of conceptual order, the absence of the aim toward truth. (Note to self: Pizlo has confused the issue a bit, but the value of acknowledging the veridicality of vision is that it helps select against certain types of explanation – types of mechanisms that wouldn’t serve this function.)

In the interest of the bigger picture, I want to try to overlook many of the atrocities in these papers and emphasise the absence of any consideration of the problem of organisation, and what that failure costs. One aspect is the lightness issue, and a comment on Todd and Egan; then there is the newfound SDT “tradition”, and I’m thinking I’ll have a go at the Solomon and Pelli Nature paper – it was helpful that Pelli has an online lecture discussing it! Because it’s Nature, and he refers to it somewhere as “seminal.”

Psychometric dysfunction

Schmidtmann et al (2015) is my next critique. I noticed it because it had the word “randomness” in the title, a term I’ve become sensitised to thanks to the business with the crazy Bayesians. So I’ve found a new vein of aimless, mindless pseudo-research. (Tyler has just published one too.) It’s enough to make you tear your hair out, if you have any left from the previous rounds. I sent them a few questions, and they answered, but it’s as though they can’t perceive the contradictions and the gaps. Anyway, tomorrow I’ll post the comment, and that will be that, as the Americans say.

In any case, these too can be called symptoms not only of stupidity (first and foremost) but also of the lack of principles – the principles of shape. I think I should deal first with Todd and Egan, then with Tyler, and definitely with the more general issue, even if I’m short on inspiration. I have a lot of holes to drill in water (much labour that will come to nothing).

Purves cont’d

PP is featuring my comments on “Perception and Reality”, which was nice after some mind-numbing, obsessive activity on that and a more recent article (Props of artificial networks etc.). (I was enlightened – it was the day of Epiphany, and this is the work I was doing…) I think they chose it because it pointed out that the authors acknowledged a previous criticism that, of course, applies to the whole series, and because it notes the incredible sloppiness of the method (which miraculously produces the desired results regardless). Plus the connection with the comprehensive critique of Purves and Yang.

This practice of acknowledging serious problems with a study at the very end, as an afterthought, has become very common. Journals will take anything by a “name” as long as the authors disingenuously acknowledge its flaws. In some contexts acknowledging your weaknesses is a strength, but here it’s just a cover.

After I comment on Kwan (Kwon?) et al (a case where the comment will be far more interesting than the article), I want to concentrate seriously on shaping up my Prob/Princ article-maybe-to-be. Not feeling inspired, though the topic is inspiring.

Purves crit

Regarding the SC example: The most salient problem is probably that the info he claims is inaccessible, and thus unavailable to the visual system, is the very info he argues the system relies on to produce percepts. Shouldn’t this be obvious? The style of argument masks the problem. Roughly, it runs as follows:

1. Luminance is the product of illumination and reflectance. Thus these two environmental facts are confounded. (Note that here, the limitation being flagged by Purves is that we lack access to R and I, not to L.)

2. Perception of relative lightness is often not correlated with relative luminance.

3. From Step 2, he jumps, unjustifiably, to the claim that we have no access to physical values in the environment. The fact that luminance isn’t always or necessarily correlated with lightness (a percept) doesn’t mean that luminance information is not available to the visual system. It just means that teasing out lightness/reflectance requires an indirect process. But, in order to reject other accounts of how perception works, Purves parlays the illumination/reflectance ambiguity into a claim that no physical info is available to the organism at all.

4. While rejecting access to luminance in order to reject other accounts, he unavoidably bases his own account on the visual system’s having access to luminance values, to an extent that even exceeds the position of accounts that acknowledge the availability of luminance, and contradicts what we know about the visual system. Specifically, his story depends on organisms registering and recording, over minutes, hours, days, lifetimes and generations, absolute and relative luminance. But the visual system is designed to respond to relative, not absolute, luminance (in response, of course, to the absolute intensities hitting the retina); see the quick check below. The experimental sampling is, of course, based on absolute luminance values, so these obviously are supposed to play a role in the process.
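
Here is the quick check mentioned in (4) – invented numbers, nothing more: scaling the illumination changes every absolute luminance, but leaves the ratios between neighbouring patches untouched, and ratios of roughly this kind are what the visual system is usually taken to respond to.

# Three adjacent surface patches under two illuminants (all values invented):
# absolute luminances change tenfold; the ratios between neighbours do not.
reflectances = [0.9, 0.5, 0.1]
for illumination in (100.0, 1000.0):
    luminances = [illumination * r for r in reflectances]
    ratios = [luminances[i] / luminances[i + 1] for i in range(len(luminances) - 1)]
    print(luminances, ratios)   # the ratios (≈1.8 and 5.0) are the same in both cases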

So anyway, the story seems to be that every luminance pattern from every second of the species existence was recorded and the intensity of light falling on every point on the retina compared to that falling on every other point throughout the lifetime of the organism, and every organism throughout the lifetime of the species. The relative luminance of these points were compared (in particular points near each other – but since the points form a continuous sheet this means that the number of combinations, and at all possible scales, is enormous. However, all this info is said to be effectively recorded throughout the life of each organism and each species, and not only recorded, but reacted to in some way, and the outcomes of each reaction affects the survival and reproduction of the species. So the nervous system of the organism is able to evaluate and selectively respond to the intensity of light coming from every point in the visual field, and also to the relative intensities of light coming from all points in the visual field, and remember them, as well as react to them, and record the frequencies of these occurrences.

The next part of the story is that the appearance of each point, based on its luminance, in the visual field depends on how often it occurred in association with the luminances of points in its vicinity, and in association with particular patterns of points in its vicinity, near or not so near, patterns simple or not so simple. For each of these luminances, in each of these patterns (which inevitably overlap, intersect, interact) the visual system has a record of past relationships to which it compares this one, and adjusts its frequency record accordingly. A point of luminance x will be given a perceptual label y based on how often points of luminance x appeared in the various contexts which “match” the current one. If they occurred often, then it will appear lighter; if they appeared less often, then it will appear darker.
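
To make the bookkeeping explicit, here is a deliberately literal caricature of that story – my own reading, with entirely invented data, assuming a percentile-rank interpretation of “how often points of luminance x appeared in matching contexts”: the label assigned to a target luminance is just its rank within the luminances previously recorded at the centres of “matching” surrounds.

# A caricature of the frequency/ranking story described above (invented data; the
# corpus of "matching contexts" is a stand-in for the lifetime/species record).
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical record: centre luminances of past patches whose surrounds "matched"
# the current surround, accumulated over the organism's (and species') history.
past_centre_luminances = rng.uniform(0, 100, size=10_000)

def assigned_lightness(target_luminance, record):
    # Percentile rank of the target within the record: the more often recorded
    # luminances fell at or below it, the lighter the label it receives.
    return float(np.mean(record <= target_luminance))

for target in (10.0, 50.0, 90.0):
    print(target, round(assigned_lightness(target, past_centre_luminances), 2))

Even in this toy form the problems raised in the comments below are visible: everything turns on what counts as a “matching” context, and on a record no visual system could plausibly keep.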

Comments

Even if it were practicable, what would be the adaptive utility of assigning lightness values in this way?


Even if a visual system could register and record in this way, this inductive procedure, failing, as it does, to distinguish chance and necessity, would be useless.

The claim that we can’t access luminance/relative luminance flies in the face of many facts, the simplest perhaps being that if we put surface x on homogeneous background y, and progressively increase the luminance of surface x, it will appear to lighten, and vice versa, in a correlated way, regardless of the lightness of the background.

The problems with the sampling are, of course, legion. Aside from the impossibility of choosing a representative sample, there is also the problem of similarity; the criteria for the “templates” are loose to begin with, and if we add factors (such as the overlap relationship) then our sample would change.

Luminance changes due to shadows and those due to reflectance changes are not distinguished…

Not only is the illumination incidental – the patterns on the 2D image are incidental too… And what does it mean to say that 2D projections have “highly structured statistics”???

The problem also is that any accumulation…

 

Widge

Oh Widgie…nothing makes sense. Nothing does not make sense. Really, I want to understand death. Life and death, how they can coexist. How consciousnesses can just come and go. I think these problems led to the art of meditation. You think, if only I can concentrate enough…I’ll figure it out. You know you can’t, but you still have to try. And then…Oh darn.

And you just sit there, and nothing seems more important…but you’re just sitting there…