Six months old! Growing up so fast, our little Paisley is. She is smiley and chubby and sleeping well and eating solid food. Seems like only yesterday she was born. She won't be a tiny baby much longer.
Above, Paisley with my llama Alphonse. Below, a comparison of Natalie (left) with Paisley (right) at about the same age. Can you spot the differences between these two babies?
A Blog That Covers And Collects News Reports And Information On Artificial Intelligence, Robots, And Super Computers.
Computational Thinking: Some thoughts about Abduction
One of the striking features of computation is the extent to which forms of pattern matching are required in computer processing. Pattern recognition can be described as a means of identifying repeated shapes or structures which are features of a system under investigation. Whilst we tend to think of patterns as visual, they can of course also be conceptual, iterative, representational, logical, mathematical, and so on, provided the underlying computational system can be programmed to recognise the distinctive shape of the pattern in the data. They can also consist of meta-patterns, described by Gregory Bateson as patterns that can be detected across different spheres, such as culture, the humanities, science and the social, or 'the pattern that connects' (see Bateson 1979; Dixon 2012). The recognition of patterns and the uncovering of their relationships in sets of data was called 'abductive reasoning' by Charles Peirce, who contrasted it with inductive and deductive reasoning. Indeed, Peirce described abduction as a kind of logical inference akin to guessing. This he called the leap of abduction, whereby one could abduce A from B if A is sufficient (or nearly sufficient) but not necessary for B. The possible uses of this within a computational context should be fairly obvious, especially when software is handling partial, fuzzy or incomplete data and needs to generate future probabilistic decision points, or to recognise important features or contours in a data set.
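To make the computational reading of abduction concrete, the following minimal sketch (in Python) treats abduction as 'inference to the best explanation': given an observation and a set of candidate hypotheses scored by how nearly sufficient each is for that observation, it abduces the most promising conjecture while marking it as fallible and still to be tested. The observation, hypotheses and scores are invented for illustration and are not drawn from Peirce.

```python
# A minimal, illustrative sketch of abductive inference ("inference to the
# best explanation"). The hypotheses and their likelihoods are invented
# examples; a real system would estimate them from data or a model.

def abduce(observation, hypotheses):
    """Return the hypothesis that best (but fallibly) explains the observation.

    `hypotheses` maps a hypothesis name to P(observation | hypothesis),
    i.e. how nearly sufficient the hypothesis is for the observation.
    """
    best, score = max(hypotheses.items(), key=lambda item: item[1])
    # Abduction only *suggests* that something may be: the conjecture is
    # provisional and must still be tested deductively and inductively.
    return {"observation": observation, "conjecture": best,
            "support": score, "status": "hypothesis to be tested"}

if __name__ == "__main__":
    # The lawn is wet (B); which hypothesis (A) would nearly suffice for B?
    candidates = {"it rained overnight": 0.7,
                  "the sprinkler ran": 0.6,
                  "a water main burst": 0.05}
    print(abduce("the lawn is wet", candidates))
```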
Charles Sanders Peirce (1839–1914) argued that pattern matching, which he called abduction or retroduction (he also used the terms presumption or hypothesis), was a type of hypothesis formation. The crucial function of 'a pattern of abduction … consists in its function as a search strategy which leads us, for a given kind of scenario, in a reasonable time to a most promising explanatory conjecture which is then subject to further test' (Schurz 2008, 205). Peirce argued,
Abduction is the process of forming an explanatory hypothesis. It is the only logical operation which introduces any new idea; for induction does nothing but determine a value, and deduction merely evolves the necessary consequences of a pure hypothesis. Deduction proves that something must be; Induction shows that something actually is operative; Abduction merely suggests that something may be (Peirce 1958: 5.171, original emphasis).

Or perhaps better:
The abductive suggestion comes to us like a flash. It is an act of insight, although extremely fallible insight. It is true that the different elements of the hypothesis were in our minds before; but it is the idea of putting together what we had never before dreamed of putting together which flashes the new suggestion before our contemplation (Peirce 1988: 227, original emphasis).

In effect, abduction is the process of arriving at an explanatory hypothesis or a process of generating a hypothesis. As Eldridge explains,
For Peirce, abduction works from these surprising facts to determine a possible, plausible explanation. Furthermore, Peirce stresses the fact that the logic of abduction is fallible – abductive inferences, like induction, can, and do, lead us to the wrong result (Peirce 1958: 5.189, 5.197, 6.532). However, as a part of the triad, abduction is able to correct itself, once it is investigated by deduction and tested by induction (Peirce 1958: 5.574). Because of this, we should never take the conclusion of an abductive inference to be a fact in and of itself until it is tested. Until that point “abduction commits us to nothing…it merely causes a hypothesis to be set down upon our docket of cases to be tried” (Peirce 1958: 5.602). Furthermore, by hypothesis, Peirce does not just mean scientific hypotheses. Abduction certainly includes the more formalized, conscious cognitive process of deliberately searching for an explanation to a set of particular facts; however, abduction is also a logical inference used in everyday life, from crude hypotheses (his Catholic priest example) to perceptual judgments (understanding the information that we receive from our senses) (Peirce 1958: 7.202, 5.180, 5.184) (Eldridge n.d.).

Patterns were made popular as a heuristic for thinking about the new problematics introduced by software systems through the work of the architect Christopher Alexander (1936-), particularly Notes on the Synthesis of Form (Alexander 1964), The Timeless Way of Building (Alexander 1979), and A Pattern Language (Alexander et al. 1977), which influenced computer scientists, who found useful parallels between building design and the practice of software design (Rybczynski 2009). Alexander's central premise in his books, 'driving over thirty years of thoughts, actions, and writings, is that there is something fundamentally wrong with twentieth century architectural design methods and practices' (Lea 1997). Indeed, A Pattern Language was originally written to enable any citizen to design and construct their own home, although it is arguable that he has had more influence on computer scientists than architects. As Appleton explains, patterns 'are a literary form of software engineering problem-solving [approach] that has its roots in a design movement of the same name in contemporary architecture... [they enable a] common vocabulary for expressing its concepts, and a language for relating them together. The goal of patterns within the software community is to create a body of literature to help software developers resolve recurring problems encountered throughout all of software development' (Appleton 2000).
The Timeless Way of Building and A Pattern Language were written as a pair, with the former presenting rationale and method, and the latter concrete details. They present a fresh alternative to the use of standardized models and components, and accentuate the philosophical, technical and social-impact differences between analytic methods and the adaptive, open, and reflective (all in several senses) approach to design. The term pattern is a preformal construct (Alexander does not ever provide a formal definition) describing sets of forces in the world and relations among them. In Timeless, Alexander describes common, sometimes even universal patterns of space, of events, of human existence, ranging across all levels of granularity. A Pattern Language contains 253 pattern entries. Each entry might be seen as an in-the-small handbook on a common, concrete architectural domain. Each entry links a set of forces, a configuration or family of artifacts, and a process for constructing a particular realization. Entries intertwine these 'problem space', 'solution space', and 'construction space' issues in a simple, down-to-earth fashion, so that each may evolve concurrently when patterns are used in development (Lea 1997).

Patterns are therefore reusable, structured, or formalised ways of doing things or processing information and data. Alexander himself defined each pattern as:
a three-part rule, which expresses a relation between a certain context, a problem, and a solution. As an element in the world, each pattern is a relationship between a certain context, a certain system of forces which occurs repeatedly in that context, and a certain spatial configuration which allows these forces to resolve themselves. As an element of language, a pattern is an instruction, which shows how this spatial configuration can be used, over and over again, to resolve the given system of forces, wherever the context makes it relevant. The pattern is, in short, at the same time a thing, which happens in the world, and the rule which tells us how to create that thing, and when we must create it. It is both a process and a thing; both a description of a thing which is alive, and a description of the process which will generate that thing (Alexander 1979: 247).

The antithesis to a pattern is called an anti-pattern, that is, a pattern that describes either (i) a bad solution to a problem which resulted in a bad situation, or (ii) how to get out of a bad situation and how to proceed from there to a good solution (Appleton 2000; Brown et al. 1998). Patterns and pattern languages provide a broader framework for thinking about questions of paradigmatic means of designing and implementing computational systems. Indeed, in many cases patterns are used in this way to indicate a set of means for the development of software at a macro level. It should also be noted that patterns can be combined with other patterns to produce new patterns at a higher level of complexity; indeed, this is the idea behind Alexander's (1977) notion of a 'pattern language'. Within software design it is quite common to see three levels noted, from most abstract to most concrete: Architectural Patterns, Design Patterns and Implementation Patterns, the last being detailed, programming-language-specific patterns, or idioms (Microsoft 2012).
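To give a concrete sense of how such a pattern reads at the implementation level, here is a minimal Python sketch of the classic Observer design pattern, one of the standard 'design patterns' in the software literature; the class and method names are my own illustrative choices rather than anything taken from the texts cited above.

```python
# A minimal sketch of the Observer design pattern: the reusable "three-part
# rule" here is (context: one object's state changes must reach many others;
# forces: the parts should stay loosely coupled; solution: a subject notifies
# whatever observers have registered with it). Names are illustrative only.

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        # The subject knows nothing about observer internals, only that each
        # exposes an `update` method -- the coupling the pattern resolves.
        for observer in self._observers:
            observer.update(event)

class Logger:
    def update(self, event):
        print(f"logged: {event}")

if __name__ == "__main__":
    feed = Subject()
    feed.attach(Logger())
    feed.notify("new pattern published")
```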
Within computer science, the more micro-level problem of automatically recognising patterns within data sets using computation is an important and challenging area of research. The main forms of pattern recognition (we can think of these as patterns to find patterns) used in computation are usually enumerated as template matching, prototype matching, feature analysis, recognition by components, Fourier analysis, and lastly bottom-up and top-down processing. I'll briefly describe each of the six main approaches.
Template Matching: This is where a computational device uses a set of images (or templates) against which it can compare a data set, which might be an image for example (for examples of an image set, see Cole et al. 2004).
[Figure: template matching (Jahangir 2008)]

Prototype Matching: This form of pattern matching uses a set of prototypes, which are understood as an average characteristic of a particular object or form. The key is that there does not need to be a perfect match, merely a high probability of likelihood that the object and prototype are similar (for an example, see Antonina et al. 2003).
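The following short sketch, assuming tiny invented 3x3 'images' and an arbitrary similarity threshold, illustrates both approaches just described: if the stored library entries are literal examples it behaves as template matching, and if each entry is an average of many examples it behaves as prototype matching, where only a sufficiently high similarity (not a perfect match) is required.

```python
import numpy as np

# Toy library: each entry is a small "image". If these are literal examples
# the scheme is template matching; if each is an average of many examples it
# becomes prototype matching. All data here is invented for illustration.
LIBRARY = {
    "vertical bar":   np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
    "horizontal bar": np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float),
}

def similarity(a, b):
    """Normalised correlation between two equally sized arrays (1.0 = identical)."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def classify(image, threshold=0.5):
    """Return the best-matching library entry, or None if nothing is close enough.

    A perfect match is not required, only a sufficiently high similarity,
    which is the key relaxation in prototype matching.
    """
    name, score = max(((n, similarity(image, t)) for n, t in LIBRARY.items()),
                      key=lambda pair: pair[1])
    return (name, score) if score >= threshold else (None, score)

if __name__ == "__main__":
    noisy_bar = np.array([[0, 1, 0], [0, 1, 0], [0, 0.8, 0]], float)
    print(classify(noisy_bar))   # -> ('vertical bar', ...)
```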
Feature Analysis: In this approach a number of steps are combined, including detection, pattern dissection, feature comparison, and recognition. Essentially the source data is broken into key features or patterns, which are then compared against a library of partial objects to find a match (for examples, see Morgan n.d.).
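A minimal sketch of the feature-analysis idea might look like the following: rather than comparing whole images, the input is first decomposed into a small feature vector which is then matched against a library of feature descriptions. The particular features (ink density, row and column profiles) and the toy data are assumptions made purely for illustration.

```python
import numpy as np

# Feature analysis sketch: the input is decomposed into a small feature
# vector (invented features: overall ink density plus row and column
# profiles) which is matched against a library of feature descriptions.

def features(image):
    image = np.asarray(image, float)
    return np.concatenate(([image.mean()],        # overall ink density
                           image.mean(axis=1),    # row profile
                           image.mean(axis=0)))   # column profile

def match_by_features(image, library):
    """Return the library item whose feature vector is nearest the input's."""
    f = features(image)
    return min(library, key=lambda name: np.linalg.norm(f - library[name]))

if __name__ == "__main__":
    library = {"T": features([[1, 1, 1], [0, 1, 0], [0, 1, 0]]),
               "L": features([[1, 0, 0], [1, 0, 0], [1, 1, 1]])}
    smudged_T = [[1, 1, 1], [0, 1, 0], [0, 0.6, 0]]
    print(match_by_features(smudged_T, library))   # -> 'T'
```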
Recognition by Components: In this approach objects are understood to be made up of what are called 'geons', or geometric primitives. A sample of data or images is then processed through feature detectors, which are programmed to look for curves, edges, etc., or through a geon detector, which looks for simple 2D or 3D forms such as cylinders, bricks, wedges, cones, circles, and rectangles (see Biederman 1987).
Fourier Analysis: This form of pattern matching uses algorithms to decompose something into smaller pieces which can then be selectively analysed. The decomposition process itself is called the Fourier transform. For example, an image might be broken down into a set of twenty squares across the image field, each of which, being smaller, is faster to process. As Moler (2004) argues, 'we all use Fourier analysis every day without even knowing it. Cell phones, disc drives, DVDs, and JPEGs all involve fast finite Fourier transforms'. Fourier transformation is also used to generate a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (the discrete cosine transform) applied to small square pieces of the digital image. The Fourier components of each square are then rounded to lower arithmetic precision, and weak components are discarded, so that the remaining components can be stored in much less computer memory or storage space. To reconstruct the image, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image. This is why the rendered image can show 'blocky' or other distinctive digital artefacts (see JPEG 2012).
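The block-transform scheme described above can be sketched in a few lines. The sketch below uses a plain fast Fourier transform from numpy rather than the discrete cosine transform that JPEG actually uses, and the 'keep the strongest quarter of the components' rule is an arbitrary illustrative choice; it nonetheless shows the decompose, discard, and inverse-transform cycle, and why artefacts appear at block boundaries.

```python
import numpy as np

# A compact sketch of block-transform compression: split an image into small
# squares, Fourier-transform each block, discard the weakest components, and
# reconstruct from what remains. JPEG proper uses the discrete cosine
# transform plus quantisation; a plain FFT is used here to stay within numpy.

def compress_block(block, keep=0.25):
    """Zero out all but the strongest `keep` fraction of Fourier components."""
    spectrum = np.fft.fft2(block)
    threshold = np.quantile(np.abs(spectrum), 1.0 - keep)
    spectrum[np.abs(spectrum) < threshold] = 0.0
    return spectrum

def roundtrip(image, block_size=8, keep=0.25):
    """Compress and reconstruct an image block by block; the 'blocky' artefacts
    mentioned in the text come from treating each square independently."""
    image = np.asarray(image, float)
    out = np.zeros_like(image)
    for r in range(0, image.shape[0], block_size):
        for c in range(0, image.shape[1], block_size):
            block = image[r:r + block_size, c:c + block_size]
            out[r:r + block_size, c:c + block_size] = \
                np.fft.ifft2(compress_block(block, keep)).real
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))           # stand-in for a real image
    approx = roundtrip(img)
    print("mean absolute error:", np.abs(img - approx).mean())
```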
Bottom-up and Top-down Processing: Finally, in the bottom-up method an interpretation emerges from the data; this is called data-driven or bottom-up processing. Here the interpretation of a data set is determined mostly by the information collected, not by prior models or structures being fitted to the data, hence this approach looks for repeated patterns that emerge from the data. The idea is that, starting with no knowledge, the software is able to learn to draw generalisations from particular examples. Alternatively, in an approach where prior knowledge or structures are applied, the data is fitted into these models to see whether there is a 'fit'. This approach is sometimes called schema-driven or top-down processing. A schema is a pattern formed earlier in a data set or drawn from previous information (Dewey 2011).
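The contrast can be illustrated on the same toy data set: a bottom-up pass lets cluster centres emerge from the data with a tiny k-means loop, while a top-down pass fits the same data into a predefined schema and reports how well the schema fits. The readings and the schema values are invented for the example.

```python
import numpy as np

# Contrasting the two directions described above on one 1-D data set.
# The measurements and the predefined schema are invented for illustration.

def bottom_up(data, k=2, iterations=20):
    """Data-driven (bottom-up): let cluster centres emerge from the data
    with a tiny k-means loop, starting from no prior model."""
    centres = np.array(data[:k], float)
    data = np.asarray(data, float)
    for _ in range(iterations):
        labels = np.array([np.argmin(np.abs(centres - x)) for x in data])
        centres = np.array([data[labels == j].mean() if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return centres

def top_down(data, schema):
    """Schema-driven (top-down): assign each observation to a prior category
    and report how well the schema fits the data."""
    data = np.asarray(data, float)
    names = list(schema)
    assigned = [names[np.argmin([abs(x - schema[n]) for n in names])] for x in data]
    fit_error = np.mean([abs(x - schema[a]) for x, a in zip(data, assigned)])
    return assigned, fit_error

if __name__ == "__main__":
    readings = [1.1, 0.9, 1.0, 5.2, 4.8, 5.1]            # two obvious groupings
    print("emergent centres:", bottom_up(readings))       # near 1.0 and 5.0
    print(top_down(readings, schema={"low": 1.0, "high": 5.0}))
```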
What should be apparent from this brief discussion of the principles of abduction and pattern matching in computer science is their creative possibilities for generating results from data sets. The ability to generate hypotheses on the basis of data, which is fallible and probabilistic, allows computational devices to generate forecasts and predictions based on current and past behaviours, data collection, models, and images. It is this principle of abductive reason which makes computational reasoning different from instrumental reason, and particularly from the iron cage of logical implication or programmatic outcome that instrumental reason suggests. Indeed, Alexander argues that the most useful patterns are generative:
These patterns in our minds are, more or less, mental images of the patterns in the world: they are abstract representations of the very morphological rules which define the patterns in the world. However, in one respect they are very different. The patterns in the world merely exist. But the same patterns in our minds are dynamic. They have force. They are generative. They tell us what to do; they tell us how we shall, or may, generate them; and they tell us too, that under certain circumstances, we must create them. Each pattern is a rule which describes what you have to do to generate the entity which it defines. (Alexander 1979: 181-182)
Bibliography
Alexander, C. (1964) Notes on the Synthesis of Form, Harvard University Press.
Alexander, C., S. Ishikawa, & M. Silverstein (1977) A Pattern Language, Oxford: Oxford University Press.
Alexander, C. (1979) The Timeless Way of Building, Oxford: Oxford University Press.
Antonina, K., Barbro, B., Hannu, V., Jarmo, T. and Ari, V. (2003) Prototype-Matching System for Allocating Conference Papers, accessed 31/03/2012, http://www.hicss.hawaii.edu/HICSS36/HICSSpapers/DTDMI06.pdf
Appleton, B. (2000) Patterns and Software: Essential Concepts and Terminology, accessed 31/03/2012, http://www.cmcrossroads.com/bradapp/docs/patterns-intro.html
Bateson, G. (1979) Mind and Nature: A Necessary Unity, (Advances in Systems Theory, Complexity, and the Human Sciences). Hampton Press, accessed 31/03/2012, http://www.oikos.org/mind&nature.htm
Biederman, I. (1987) Recognition-by-Components: A Theory of Human Image Understanding, Psychological Review, Vol. 94, No. 2, 115-147, accessed 31/03/2012, http://www.cim.mcgill.ca/~siddiqi/COMP-558-2012/rbc.pdf
Brown, W., Malveau, R., McCormick, H. and Mowbray, T. (1998) AntiPatterns, accessed 31/03/2012, http://www.antipatterns.com/
Cole, L., Austin, D. and Cole, L. (2004) Visual Object Recognition using Template Matching, accessed 31/03/2012, http://www.araa.asn.au/acra/acra2004/papers/cole.pdf
Dewey, R. A. (2011) Top-down and Bottom-up Processing http://www.intropsych.com/ch07_cognition/top-down_and_bottom-up_processing.html
Dixon, D. (2012) Analysis Tool or Design Methodology? Is there an epistemological basis for patterns?, in Berry, D. M. (ed.) Understanding Digital Humanities, London: Palgrave.
Eldridge, M. (n.d.) Clarifying the Process of Abduction and Understanding “Inductive” Generalization, accessed 31/03/2012, http://www.philosophy.uncc.edu/mleldrid/SAAP/USC/TP26.html
Jahangir, N. (2008) Genetic Algorithm Driven Template Matching In ActionScript 3.0, accessed 31/03/2012, http://nadimissimple.wordpress.com/2008/12/11/genetic-algorithm-driven-template-matching/
JPEG (2012) JPEG Homepage, accessed 31/03/2012, http://www.jpeg.org/jpeg/index.html
Lea, D. (1997) Christopher Alexander: An Introduction for Object-Oriented Designers, accessed 31/03/2012, http://g.oswego.edu/dl/ca/ca/ca.html
Microsoft (2012) Organizing Patterns, accessed 01/04/2012, http://msdn.microsoft.com/en-us/library/ff647589.aspx
Moler, C. (2004) Numerical Computing with MATLAB, accessed 31/03/2012, http://www.mathworks.se/moler/chapters.html
Morgan, M. (n.d.) Feature Analysis, accessed 31/03/2012, http://www.staff.city.ac.uk/~morgan/FeatureAnalysis.pdf
Peirce, C. S. (1958) The Collected Works of Charles Sanders Peirce, Harvard University Press.
Peirce, C. S. (1988) Pragmatism as the Logic of Abduction, in The Essential Peirce: Selected Philosophical Writings, 1893–1913, Bloomington: Indiana University Press.
Rybczynski, W. (2009) Do You See a Pattern?, Slate, accessed 31/03/2012, http://www.slate.com/articles/arts/architecture/2009/12/do_you_see_a_pattern.html
Schurz, G. (2008) Patterns of Abduction, Synthese, 164 (2): 201-234.
Pick of the Pack
As the countdown of the last days of The Crazy Tree continues, more videos from behind the scenes of OTH. And OMG's pick of the best on The Crazy Tree has to be Austy. Austin talks more about directing and how much he loves it, and you can see him in action. Check him out in this video.
And pack to the pick... the toothpick.
Will e-books suffer the same fate as mp3 music files?
I have already written here about what I think of the problem of piracy in music, but with the arrival of electronic books, or e-books, we have seen that they could end up the same way as pirated music. And the reason seems to be the same: they are easy to copy, they can be transmitted over the network, and there is already a set of fairly standard formats that make life easier for everyone.
Amazon, for example, has said that it sells more electronic books than paper ones, and that may well be true, thanks to the technology of the Kindle tablets, originally made for reading electronic documents and now also used for playing games, watching videos or tracking the calories you consume.
To obtain an e-book, the process is as easy as browsing the huge catalogue offered by Amazon or Barnes & Noble, among other giant bookshops: find the volume we want, pay for it with a credit card and, literally, have it on our device within a couple of minutes.
In addition to these reading tablets, Amazon allows reading via an application for the PC. This increases the potential market of readers, although it poses a series of problems for those who distribute the content, because it is clear that nobody wants their legitimate rights as authors to be stolen.
If you use a Kindle or another tablet dedicated to reading e-books, you probably cannot copy those files to pass them on to other computers, your own or anyone else's. Amazon, for example, has taken precautions so that an e-book cannot be passed digitally from hand to hand, that is, to prevent the file from being reproduced.
However, it does not seem to matter whether books are copy-protected, or whether it is difficult to unlock them in order to share them with everyone. And it does not matter, because there are people in the world willing to scan all kinds of books if necessary and upload them to file-hosting servers such as 4shared or depositfiles, for example.
Of course there are many books that have been scanned poorly, but there are others that have not, and that even look like originals taken straight from the electronic edition. For many people it is enough to have the information, even if it comes in stained scans or with pages that appear crooked; others will look for a better-scanned edition and, even if they have the book in PDF format, will want to buy it on paper. Perhaps this is the recourse of certain characters who, in the best bookselling tradition, know that there is nothing like an original book: leafing through it, examining it, even smelling it, savouring it with the eyes, feeling it physically.
This problem with electronic books is becoming ever more common. There is already a good number of digitised books on the most diverse subjects, from self-help to quantum physics by way of animal husbandry. With these electronic formats, sharing information is easier and easier, and there are more and more tools for cleaner scanning, such as the one shown here or this other one. It is clear to me that the piracy of digital works such as e-books will become a topic for discussion.
Code, Foucault and Neoliberal Governmentality
For Foucault, Neoliberal governmentality is a particular form of post-welfare state politics in which the state essentially outsources the responsibility for ensuring the 'well-being' of the population. The primary recipient of this responsibility is derived from a strengthened notion of the subject, as a rational individual. Indeed, these new subjectivities are expected to 'look after themselves'. This form of governmentality has an extremely diffuse form of rule whereby strategies and imperatives of control are distributed through a variety of media but are implicated in even the most mundane practice of everyday life. As Schecter writes,
Foucault regards the exercise of power and the formalisation of knowledge to be intimately bound up with the constitution of living individuals as subjects of knowledge, that is, as citizens and populations about whom knowledge is systematically constructed... Subjects are not born subjects so much as they become them. In the course of becoming subjects they are classified in innumerable ways which contribute to their social integration, even if they are simultaneously marginalised in many cases (Schecter 2010: 171).
So for example, the state promotes an ethic of self-care which is justified in terms of a wider social responsibility and which is celebrated through the examples given in specific moments represented as individual acts of consumption that contribute to a notion of good citizenship. So using recycling bins, caring for one's teeth, stopping smoking, and so forth are all actively invested by the state as beneficial to both individual and collective care, but most importantly they are the responsibility of the citizen to abide by.
Neoliberal governmentality also gestures towards the subordination of state power to the requirements of the marketplace, the implication being that 'political problems' are re-presented or cast in market terms. Within this framework citizens are promised new levels of freedom, consumerism, customisation, interactivity and control over their life and possessions. In other words, they are promised an unfulfilled expectation as to the extent to which they are able to exert their individual agency.
In order to facilitate this governmental platform certain infrastructural systems need to be put in place, bureaucratic structures, computational agencies and so forth. For example, it has become increasingly clear that providing information to citizens is not sufficient for controlling and influencing behaviour. Indeed, people's ability to understand and manipulate raw data or information has been found to be profoundly limited in many contexts with a heavy reliance on habit understood as part of the human condition.
It is here that the notion of compactants (computational actants) allows us to understand the way in which computationality has increasingly become constitutive of the understanding of important categories in late capitalism, like privacy and self-care. Here we could say that we are interested in a transition from the juridicification, through the medicalisation, to the 'computationalisation' of reason. Hence, following Foucault, we are interested in studying the formation of discrete powers rather than power in general. That is, Foucault is interested 'in the processes through which subjects become subjects, the truth becomes truth, and the changing conditions under which this happens, which in the first instance is the discrepancy between the visible and the readable' (Schecter 2010: 173). Or as Foucault himself writes:
What is at stake in all this research about madness, illness, delinquency, and sexuality, as well as everything else I have been talking about today, is to show how the coupling of a series of practices with a truth regime forms an operative knowledge-power system (dispositif) which effectively inscribes in the real something that does not exist, and which subjects the real to a series of criteria stipulating what is true and what is false, whereby these criteria are taken to be legitimate. It is that moment which does not exist as real and which is not generally considered relevant to the legitimacy of a regime of true and false, it is that moment in things that engages me at the moment. It marks the birth of the asymmetrical bi-polarity of politics and economics, that is, of that politics and economics which are neither things that exist nor are errors, illusions or ideologies. It has to do with something which does not exist and which is nonetheless inscribed within the real, and which has great relevance for a truth regime which makes distinctions between truth and falsity (Foucault, The Birth of Bio-Politics, quoted in Schecter 2010: 173).
Indeed, the way in which compactants generate certain notions of truth and falsity is a topic requiring close investigation, both in terms of the surface interface generating a 'visible' truth, and the notion of a computational, or cloud, truth that is delivered from the truth-machines that lie somewhere on the networks of power and knowledge.
Foucault suggests that if there is a 'system' or ensemble of systems, the task is somehow to think systemic functioning outside of the perspective of the subject dominated by or in charge of the so-called system. Critical thinking can deconstruct the visible harmony between casual seeing and instrumental reason... in contrast with monolithic appearances, surfaces are characterised by strata and folds that can inflect power to create new truths, desires and forms of experience (Schecter 2010: 175).

Here we can make the link between sight and power, and of course sight itself is deployed such that the 'visible' is neither transparent nor hidden. Compactants certainly contribute to the deployment of the visible, through the generation of certain forms of geometric and photographic truths manifested in painted screens and surfaces.
Bibliography
Schecter, D. (2010) The Critique of Instrumental Reason from Weber to Habermas, New York: Continuum.
Battle of Zama AAR #1 (Impetus rules)
The Battle of Zama (you can read a rather excellent history of the battle here) was fought between Carthage and Rome and two famous generals (Hannibal and Scipio). Scipio had studied Carthage's tactics and victories and developed his own to counteract Hannibal's (such as having more cavalry on the field to prevent another Cannae, and creating lanes in his legions to allow the Carthaginian elephants to pass by with the minimum of fuss). Oh... and he used trumpets and caltrops to get them to stampede back into their own lines.
So before Thursday, Russ came round for a trial run. As you know, we've had some run-ins with the Impetus rules but Tuesday saw the scales fall from my eyes (and not just because I won convincingly!). A lot of the rules started to make sense (especially the 5cm zone of control) and I finally learned how to kill elephants (albeit these weren't the Indian shooty variety).
I had the Romans, Russ the Carthaginians. We found afterwards that the Carthos were about 50 points underpowered (a result of Russ not being able to add up) and he beefed them up for Thursday's battle (see the forthcoming Battle of Zama AAR #2).
Set up was fairly simple. The Roman legions were laid out in blocks of 4 (velites, hastati, principes and triarii) with the Roman cavalry on the left and the Numidian allies on the right. Russ forgot to give me the Spanish allies and a base of Numidians so we had to squeeze them in any old how. Scipio was positioned with the middle base of triarii at the back.
The Carthaginians had cavalry on both flanks (both outnumbered by the Romans). Out front were the elephants, then a line of skirmishers, then the Gauls and Spanish scutarii and the African Spears at the back (with Hannibal).
We set both Scipio and Hannibal at the same level (getting +3 on the dice) as Scipio (as proved in the battle) was every bit as cunning as his opposite number.
Russ got the ball rolling by sending out his elephants (half of which promptly got disordered) and I lined up my velites and scutari to have a bash at them. The whole Roman army moved up as one but I didn't want to risk any disorders with elephants in charge range so the velites only moved once while the rest moved twice (and disordered units sprouted up everywhere).
On the left flank the Romans were keen to get stuck in and after the Carthos had moved, I decided to charge them in. I probably didn't need to do this but there was an elephant close by and I wanted to avoid its attentions. The Carthos evaded but I contacted a unit of skirmishers that were left in the way and caused them much grief and bloodshed.
My velites then got in on the act and lined up some elephants (notably the disordered ones) - as did the allied scutari. This proved to be highly effective. The elephants were unsupported and got peppered by javelins all down the line - forcing them backwards and inflicting casualties. This meant they weren't fresh and so lost impetus. Happy days!
This meant that quite quickly the elephants were being killed off - meaning that the first line of defence was already crumbling.
I was also aided by some lethal dice throwing - 3 hits from 4 dice and 4 from 5! For Impetus, that is quite impressive (as you only hit on a 6 or two 5's).
With two elephants gone and two disordered and surrounded by javelin troops it was already looking bad for Carthage.
On the left the cavalry converged (with Russ charging my disordered cavalry). However, he was going in with 2 units against 3 and this would eventually tell against him (although their light cavalry put up a hell of a fight).
On the right, I could only line up two of my three cavalry (as we were playing lengthways to ensure combat ensued!) against one of Russ' (as the skirmish line was in the way). Russ also had his cavalry on opportunity which deterred me from charging in.
On the left, Roman numbers told and Russ lost another unit (light horse) so he was down by 4 units to 0.
But he got his revenge - he charged an elephant at some velites and I got overconfident. I decided to stand and chuck javelins (as the elephant has no impetus against skirmishers). They didn't do too much damage and - once in melee - the velites quickly popped off. I had to line up some triarii to take on the grey monster instead.
But he was the last of the pachyderms left alive. The rest of the velites and scutarii - having completed their first mission - then went for the second: kill off the remaining enemy skirmishers.
Again my dice rolling in combat was awesome (5 hits from 7 dice) which was countered by Russ rolling lots of 1's.
At this stage I was in very good shape. I was winning on the left, in the centre and had 2:1 on the right (albeit against better cavalry as Russ had charged in). The numerical advantage didn't last long as another Cartho cavalry unit managed to contact the melee and now the Numidians were up against it. To make sure, Russ angled his scutari at my horses as well (running in behind the skirmishers). But exposing a flank in the process.
The Gauls had also now come within 30cm of the left side cavalry and - true to nature - bombed out with no thought to tactics or safety to get to grips with their horsey opponents. I took this opportunity to kill off the remaining elephant with my triarii by encouraging it to charge me when I went disordered.
Back on the right, a combination of cavalry, skirmishers and scutarii managed to kill off two of the Numidians. We made an error here (the third should have routed automatically). The remaining horse managed to push back a skirmisher unit, creating a mess of the Cartho left wing.
This was exacerbated by the other unit of Gauls pushing towards their left as well - one light horse unit had become a magnet for 1/3 of the Cartho army! This meant flanks were exposed, and they proved a tempting target for me.
The triarii meanwhile had despatched the elephant and a unit of principes came up to support them in a dash up the left flank. I also moved a unit of triarii out on the right to protect it from the cavalry. But Russ' top Spanish cavalry was down to 1 VBU and so virtually spent.
I therefore rushed my velites forward to javelin the flanks of his scutarii. They took some damage as the rest of the velites rushed up as well. This led to a classic collapse down the line as unit after unit collapsed (or were pushed back by the Gauls and scutarii pushing through their own skirmishers).
Hundreds of javelins poured into the Gauls and Scutari as they were out of position. Russ had to re-align his cavalry to scare off the velites as a result (and conscious of the encroaching triarii).
Time caught up with us at this point, so we had to call it.
The Romans had lost the right wing cavalry (3 bases) plus a unit of velites.
The Carthaginians had lost the left wing cavalry (2 bases) plus skirmishers (3 bases) and elephants (4 bases). The remaining Spanish cavalry were spent.
Given that the Roman skirmishers were in good order and the legions were virtually unscathed we gave it as a major Roman victory. Huzzah!
Verdict
I've struggled with Impetus for the past few weeks, trying to get over the elements of the rules that seemed unfair / arbitrary etc. Tuesday though was an epiphany - I could see where I'd been making mistakes and how I needed to conserve troops (especially keeping them fresh and not disordered) in order to get the best from them. It wasn't because I won, it was because I finally 'got' the rules and started to use them to my advantage.
What I especially learned was :-
* keep units fresh
* keep away from skirmishers unless using skirmishers
* try to avoid going disordered
* use opportunity if you're not going to move
* don't expose flanks
* don't send units out on their own
Breaking News
Your Attention Please
Get out your shaker and slap on some Sex Panther.
Mr. Nichols you got some drinks to make.
Ron Burgundy is back!
Tonight is a night for The Ron Burgundy. Remember this drink?? - Aus10 August 27, 2010
I just invented a new drink. The ron burgundy. I may release the recipe. - Aus10 August 7, 2010
The ron burgundy is titos vodka. Sprite. And cran-pom!! It is GLORIOUS!!!!! - Aus10 August 8, 2010
Burgundy made the announcement on Conan two nights ago. And it's confirmed that Action 4 News team will be back too.
Tough break, that there's no open spots at Action 4, unless they need an Aqua Man for the beach and boating report. But the other rival anchor news teams may still need some new members after the last street fight.
And a funky merman like you could take on a triton and rock the hat.
Stay classy, Mr. Nichols.
Happy Austin Friday
And good luck to everyone buying a ticket for Mega Millions!
Sam Harris on Free Will
The religious instinct is not merely limited to belief in God and supernatural agents. And to varying degrees, even hard-core atheists tend to be religious in this sense, since they still adopt beliefs that may be religious in origin. It's a little too convenient that when one denies the existence of God, most other beliefs are not similarly rejected, but why should this be the case?
If we reject God, we can't simply assume the reality of the continued identity of the self (or even its very existence), an objective basis for morality, a rational basis for science, the existence of free will, the reality of the external world, the very idea of objective truth, etc. We need to mount arguments and evidence in support of these ideas if we want to be able to have a right to such beliefs.
And Sam Harris thinks we're lying to ourselves if we believe that our wills are free. His arguments are not particularly interesting or new here (and to many not even convincing). Harris may have just written a concise little book on the subject, but he's no Nietzsche, who clinched the case against free will and the self even more concisely, in less than a paragraph:
A thought comes when ‘it’ wishes, and not when ‘I’ wish, so that it is a falsification of the facts of the case to say the subject ‘I’ is the condition of the predicate ‘think’. It thinks: but that this ‘it’ is precisely the famous old ‘ego’ is, to put it mildly, only a superstition, an assertion, and assuredly not an ‘immediate certainty’. . . . Even the ‘it’ contains an interpretation of the process, and does not belong to the process itself. One infers here according to the grammatical habit: ‘thinking is an activity; every activity requires an agent; consequently —’.

But where Harris is interesting (and I've subscribed to this line of thinking for at least a decade now) is in what he has to say about the implications of the denial of free will: it doesn't de-humanize us. This recognition humanizes us because it helps us to understand that instead of jumping to conclusions and throwing blame around, as we're wont to do, maybe we need to be more compassionate and understand that people are not fully free, and that their actions are at least partly attributable to circumstances and other causal antecedents...
While I agree with a good number of points made by Harris, there is at least one fundamental point on which he seems to be utterly confused: his denial of free will cannot be a scientific conclusion when he argues that there is no possible world in which free will could, even in principle, exist. If this is not a testable claim that could be decided by empirical evidence but simply by conceptual analysis (as I would be perfectly happy to do), then this is a philosophical conclusion... and people say philosophy doesn't make progress :)
Faster Than 50 Million Laptops
The Cray Jaguar supercomputer can perform more than a million billion operations per second. It takes up more than 5,000 square feet at Oak Ridge National Laboratory in the United States. In 2009 it became the fastest computer in the world.
Faster Than 50 Million Laptops -- The Race To Go Exascale -- CNN
(CNN) -- A new era in computing that will see machines perform at least 1,000 times faster than today's most powerful supercomputers is almost upon us.
By the end of the decade, exaFLOP computers are predicted to go online heralding a new chapter in scientific discovery.
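The headline figure can be checked with a quick back-of-the-envelope calculation; the roughly 20 gigaFLOPS assumed for an ordinary laptop is my own illustrative estimate, not a number from the article.

```python
# Back-of-the-envelope check of the headline, assuming roughly 20 gigaFLOPS
# of sustained performance for an ordinary laptop (illustrative estimate only).
petaflop = 1e15      # Jaguar-class: "a million billion operations per second"
exaflop = 1e18       # the predicted machines: 1,000 times faster
laptop = 20e9        # assumed performance of one laptop

print(f"exascale machine is roughly {exaflop / laptop:,.0f} laptops")  # about 50,000,000
print(f"speed-up over a petaflop machine: {exaflop / petaflop:,.0f}x")
```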
The United States, China, Japan, the European Union and Russia are all investing millions of dollars in supercomputer research. In February, the EU announced it was doubling investment in research to €1.2 billion ($1.6 billion).
Read more ....
My Comment: Now that is fast.
My new obsession: LEGO mecha
It started with this, the Mobile Frame Zero kickstarter page. I need a new board game, or a new hobby, like a hole in the head. But I'm absolutely enamored with these little LEGO robots. Some builders are doing incredible work, using blocks in ways that were never intended, and I'm deep in the rabbit hole of photo galleries. I mean just look at these:
What.
WHAT!
WHAAAAAAAAAAT?!
Enough to make you Quizzy
Jake was seen popping up all over LA:
Grabbing Groceries
Filling Up the Tank
And yesterday out for a drag... (not the SNL kind)
Pictures removed.
Every picture tells a story, but with Jake it's more like a quiz show.
Why is Jake so covered up while shopping for groceries?
Did he not want to be seen running in to buy what he had to pick up?
Why would paps take pictures of him IN a grocery store, what is the big deal to that?
And while we all are impatient in the grocery line, Jake seems a little more anxious than your everyday shopper, just from the way his eyes dart around.
And lastly - why are there NO pictures of him outside in the parking lot? A public space.
And the cig break? Why now? When he has a history of being discreet when he did smoke all those years ago?
And why are those pictures cropped so close?
Pictures Removed
Could all of this actually be the Evil Twin tour? That Yankees cap should be a clue, why else would you wear it when you call yourself a Red Sox fan?
Evil Twin... Jeke. Getting in touch with his twin before filming begins?
"Seriously....coupons and now you going to count out the exact change?"
Tonight Jake will be honored along with others at The Advocate's 45th anniversary celebration. He and Heath will be honored for their work in telling the epic romance of Brokeback Mountain.
One word, Mr. G. You don't have to look like you've come from the mountain to receive the honor. Here's hoping the face fur will get thinned and trimmed so we can see more of that beautiful smile.