Ed Blakey (ed.blakey@queens.oxon.org)
School of Mathematics
University of Bristol
University Walk
Bristol BS8 1TW
United Kingdom

Ed Blakey’s publications


Key: * Invited paper/talk (8); Peer-reviewed journal paper (7); Peer-reviewed conference paper (6); Other conference/seminar presentation (8); Research report (2); Academic dissertation (3); Patent (1); Miscellaneous (1)

In preparation/press: Non-Quantum Implementations of Shor’s Algorithm  [Journal paper]

Abstract. To follow.

In preparation/press: Foreword: Information Security as a Resource  [Journal foreword]

In Information and Computation, Elsevier, ISSN 0890-5401 (to appear)

This foreword is to appear in the special Information Security as a Resource issue of Information and Computation. It was written by Blakey on behalf of the issue’s guest editors, E. Blakey, B. Coecke, M. Mislove and D. Pavlovic.

In preparation/press: Complexity-Style Resources in Cryptography  [Journal paper]

In Information and Computation, Elsevier, ISSN 0890-5401 (to appear)

This peer-reviewed journal paper is to appear in Information and Computation. This research is supported in part by the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models, and in part by the Leverhulme Trust.

Abstract. Previously, the author has developed a framework within which to quantify and compare the resources consumed during computational—especially unconventional computational—processes (adding to the familiar resources of run-time and memory space such non-standard resources as precision and energy); it is natural and beneficial in this framework to employ various complexity-theoretic tools and techniques. Here, we seek an analogous treatment not of computational processes but of cryptographic protocols and similar, so as to be able to apply the existing arsenal of complexity-theoretic methods in new ways, in the derivation and verification of protocols in a wider, cryptographic context. Accordingly, we advocate a framework in which one may view as resources the costs—which may be related to computation, communication, information (including side-channel information) or availability of primitives, for example—incurred when executing cryptographic protocols, coin-tossing schemes, etc. The ultimate aim is to formulate as a resource, and be able to analyse complexity-theoretically, the security of these protocols and schemes.

Keywords: computational complexity, cryptography, factorization, resource, security, unconventional computation

Generalizing Complexity Theory (2012)  [Conference slides 1]* [2]*

This conference presentation was given, by invitation, at Physics and Computation 2012, the visit being funded by the EPSRC; the slides used in the presentation are available here. A shorter version of the presentation was given, again by invitation, at the 64th Theorietag—Workshop on Algorithms and Complexity; the slides used in the presentation are available here. This research is supported in part by the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models, in part by the Leverhulme Trust, and in part by the European Commission.

Abstract. Citing the example of an analogue device that factorizes natural numbers in polynomial time, we argue that certain unconventional computers demand complexity analyses differing in approach from those of Turing machines and similar, conventional systems. Accordingly, we describe a computational-model-independent framework of complexity theory that allows analysis in the conventional and unconventional realms alike. We illustrate the framework’s use via analysis of certain optical systems, thus allaying a controversy connected therewith. Finally, we mention a possible direction in which this work may be extended.

Cellular Automata get their Wires Crossed (2012)  [Conference paper] [Conference slides] [Animation 1] [2]

In NCMA 2012, R. Freund, M. Holzer, B. Truthe, U. Ultes-Nitsche (editors), Austrian Computer Society, ISBN 978-3-85403-290-8

This peer-reviewed conference paper was presented at Non-Classical Models of Automata and Applications 2012, in the proceedings of which it appears. The slides used in the presentation are available here, and the animations here: 1 2. This research is supported in part by the Leverhulme Trust.

Abstract. In three spatial dimensions, communication channels are free to pass over or under each other so as to cross without intersecting; in two dimensions, assuming channels of strictly positive thickness, this is not the case. It is natural, then, to ask whether one can, in a suitable, two-dimensional model, cross two channels in such a way that each successfully conveys its data, in particular without the channels interfering at the intersection. We formalize this question by modelling channels as cellular automata, and answer it affirmatively by exhibiting systems whereby channels are crossed without compromising capacity. We consider the efficiency (in various senses) of these systems, and mention potential applications.

Keywords: cellular automaton, chip design, communication channel, junction, road network, wire crossing.
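
As a toy illustration of the channel-as-cellular-automaton modelling mentioned in the abstract (and emphatically not the crossing construction from the paper, which is given in the proceedings), the following sketch, in Python with purely illustrative names, shows a one-dimensional automaton acting as a single channel: every cell copies its left neighbour at each synchronous step, so a bit-stream injected at the source emerges, delayed but intact, at the sink.

    # Toy sketch only: a 1-D cellular automaton used as a communication
    # channel.  Each cell copies its left neighbour's state at every
    # synchronous step.  (Illustrative of the modelling idea only; the
    # crossing of two such channels is the paper's contribution and is
    # not attempted here.)

    def step(cells, incoming_bit):
        """One synchronous update: shift every bit one cell towards the sink."""
        return [incoming_bit] + cells[:-1]

    def transmit(stream, channel_length):
        cells = [0] * channel_length                 # quiescent channel
        received = []
        for bit in stream + [0] * channel_length:    # pad to flush the channel
            received.append(cells[-1])               # bit arriving at the sink
            cells = step(cells, bit)
        return received

    sent = [1, 0, 1, 1, 0, 0, 1]
    print(transmit(sent, channel_length=5))          # 'sent' reappears after a 5-step delay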

Ray Tracing—Computing the Uncomputable? (2012)  [Conference paper] [Conference slides]

In Electronic Proceedings in Theoretical Computer Science, ISSN 2075-2180 (to appear)

This peer-reviewed conference paper was presented at Developments in Computational Models 2012, in the proceedings of which it is to appear. The slides used in the presentation are available here. This research is supported in part by the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models, and in part by the Leverhulme Trust.

Abstract. We recall from previous work a model-independent framework of computational complexity theory. Notably for the present paper, the framework allows formalization of the issues of precision that present themselves when one considers physical, error-prone (especially analogue rather than digital) computational systems. We take as a case study the ray-tracing problem, a Turing-machine-uncomputable problem that can, in apparent violation of the Church-Turing thesis, nonetheless be said to be solved by certain optical computers; however, we apply the framework of complexity theory so as to formalize the intuition that the purported super-Turing power of these computers in fact vanishes once precision is properly considered.

Keywords: Church-Turing thesis, complexity, computability, optical computation, precision, ray tracing, unconventional computation

Too Good to be True? Precisely! The Real Cost of Non-Standard Computation (2012)  [Seminar slides]*

This presentation was given, by invitation, to the Quantum Information and Foundations group at Cambridge University, as part of their seminar series. The work forms part of the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models.

Abstract. Certain (unconventional, non-Turing) computers give the illusion of efficiency, whilst being fundamentally impracticable. The illusion stems from the computers’ polynomial time and space complexity, the impracticability from their exponential complexity with respect to computational resources other than time and space (precision, for example, is such a resource). From previous research, we recap a paradigm- and resource-independent framework of computational complexity that dispels this illusion. From imminent research, we mention an application of the framework to Shor’s algorithm (the question here is ‘what is the precision complexity of Shor’s algorithm?’). And from proposed research, we describe an extension of the framework from complexity-theoretic to cryptographic concerns.

Turing and Non-Turing Computers: A Tale of Two Complexities (2011)  [Conference slides]*

This conference presentation was given, by invitation, at Computing 2011; the visit was funded in part by the Karlsruhe House of Young Scientists. The slides used in the presentation are available here. The work forms part of the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models.

Abstract. The scope of Computing 2011 includes two topics—different models of computing and complexity—that we combine in the present talk. Specifically, we demonstrate that unconventional computers (that is, systems—including quantum, analogue, DNA/chemical and optical computers—that do not adhere to models polynomially equivalent to the Turing machine) demand computational complexity analysis that differs from that of Turing machines, random-access machines and equivalent. Indeed, simply following the Turing-style pattern of complexity analysis can lead one to conclude that, for example, certain factorization systems enjoy polynomial complexity; this is the case not only with computers such as Shor’s theoretically sound but (to date) practically unimplementable quantum system, but also with an analogue computer (which we describe in the talk) that in fact suffers from exponential complexity (which fact becomes evident once a suitable, model-independent notion of complexity is adopted). We present in this talk just such a model-independent framework for computational complexity theory.

Resources in Cryptography (2011)  [Conference slides]

This conference presentation was given at Information Security as a Resource. The slides used in the presentation are available here. The work forms part of the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models.

Abstract. In previous work, the author has developed a framework within which to deal with the resources consumed during computational processes (adding to the familiar resources of run-time and memory space such non-standard resources as precision and energy); this framework provides various complexity-theoretic tools and techniques. Here, we seek an analogous treatment not of computational processes but of cryptographic protocols and similar, so as to be able to apply the existing complexity-theoretic tools and techniques in the derivation and verification of protocols in a wider information-theoretic context. Accordingly, we advocate a framework in which one may reason about the costs—which may be related to computation, communication, information (including side-channel information), availability of primitives, etc.—incurred when executing cryptographic protocols, coin-tossing schemes, etc.

Counting the Cost of Quantum Computation and Cryptography (2011)  [Seminar slides]*

This presentation was given, by invitation, to the Quantum Computation and Information group at Bristol University, as part of their seminar series. The slides used in the presentation are available here. The work forms part of the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models.

Abstract. In previous work, we have shown that successful complexity analysis of unconventional computers requires an approach different from that employed in the standard, Turing-machine case; notably for the paradigm’s timeliness and importance, this is true of the quantum computer. We recap from previous research a model-independent framework of complexity theory (which, of course, accommodates quantum computation), mention from imminent research an application of the framework to Shor’s algorithm, and describe from proposed research an extension of the framework from complexity-theoretic to cryptographic concerns.

A New Gap Theorem: the Gap Theorem’s Robustness against Dominance (2011)  [Journal paper] [Conference slides]

In the International Journal of Unconventional Computing, vol. 7, no. 3, A. Adamatzky (editor-in-chief), Old City Publishing, ISSN 1548-7199 (print)/1548-7202 (online)

This peer-reviewed journal paper appears in the International Journal of Unconventional Computing. The research was presented at Science and Philosophy of Unconventional Computing; the slides used in the presentation are available here. The work forms part of the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models.

Abstract. Thanks to Trachtenbrot and Borodin, computational complexity theorists have the Gap Theorem, which guarantees the existence of arbitrarily large gaps in the complexity hierarchy. That is, there can be made arbitrarily large increases in resource availability that, for all their vastness, yield no additional computational power: every computation that can be performed given the extra resource could have been performed in its absence.

In response to needs arising during the complexity analysis of unconventional—quantum, analogue, chemical, optical, etc.—computers, the notion of dominance was introduced (we recap in the present paper the relevant definitions). With dominance comes a corresponding hierarchy of complexity classes, and it is natural to ask whether a gap theorem, analogous to that of Trachtenbrot and Borodin, holds in this context.

We show that a gap theorem does indeed hold for certain dominance-based classes. We consider related statements concerning other dominance-based classes, and formulate corresponding conjectures.

Keywords: Gap Theorem, Dominance, Computational Complexity, Computability, Comparison of Complexity, Unconventional Computing.
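
For reference, the classical result referred to above may be stated as follows (a standard paraphrase of the Trachtenbrot–Borodin Gap Theorem for deterministic time; the dominance-based analogue is what the paper establishes):

    % Gap Theorem (Trachtenbrot; Borodin), classical form: for every
    % total computable g with g(n) >= n there is a total computable
    % bound t such that allowing g(t(n)) steps rather than t(n) steps
    % decides no additional languages:
    \[
      \mathrm{DTIME}\bigl(t(n)\bigr) \;=\; \mathrm{DTIME}\bigl(g(t(n))\bigr).
    \]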

A Model-Independent Theory of Computational Complexity: From Patience to Precision and Beyond (2010)  [Doctoral thesis] [Maths Genealogy entry]

Doctoral dissertation

This doctoral dissertation was submitted for the degree of Doctor of Philosophy in Computer Science (Oxford, 2006 – 2011); the author passed the viva-voce examination of this thesis on 24.i.2011. The work was supervised by Bob Coecke and Joël Ouaknine (which, therefore, places the author here in the Maths Genealogy Project), and funded in part by the EPSRC grant Complexity and Decidability in Unconventional Computational Models.

Abstract. The field of computational complexity theory—which chiefly aims to quantify the difficulty encountered when performing calculations—is, in the case of conventional computers, correctly practised and well understood (some important and fundamental open questions notwithstanding); however, such understanding is, we argue, lacking when unconventional paradigms are considered. As an illustration, we present here an analogue computer that performs the task of natural-number factorization using only polynomial time and space; the system’s true, exponential complexity, which arises from requirements concerning precision, is overlooked by a traditional, ‘time-and-space’ approach to complexity theory. Hence, we formulate the thesis that unconventional computers warrant unconventional complexity analysis; the crucial omission from traditional analysis, we suggest, is consideration of relevant resources, these being not only time and space, but also precision, energy, etc.

In the presence of this multitude of resources, however, the task of comparing computers’ efficiency (formerly a case merely of comparing time complexity) becomes difficult. We resolve this by introducing a notion of overall complexity, though this transpires to be incompatible with an unrestricted formulation of resource; accordingly, we define normality of resource, and stipulate that considered resources be normal, so as to rectify certain undesirable complexity behaviour. Our concept of overall complexity induces corresponding complexity classes, and we prove theorems concerning, for example, the inclusions therebetween.

Our notions of resource, overall complexity, normality, etc. form a model-independent framework of computational complexity theory, which allows: insightful complexity analysis of unconventional computers; comparison of large, model-heterogeneous sets of computers, and correspondingly improved bounds upon the complexity of problems; assessment of novel, unconventional systems against existing, Turing-machine benchmarks; increased confidence in the difficulty of problems; etc. We apply notions of the framework to existing disputes in the literature, and consider in the context of the framework various fundamental questions concerning the nature of computation.
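
A back-of-envelope illustration of the precision point made above, with an assumed resolution requirement chosen purely for illustration (it is not a figure taken from the thesis): if factorizing n obliges the apparatus to resolve features whose separation shrinks like 1/n, then, measured against the input size |n| ≈ log₂ n, the number of distinguishable positions the apparatus must maintain grows exponentially, even while its time and space remain polynomial.

    % Illustrative only: an assumed spatial resolution of order 1/n,
    % re-expressed as a function of the input size |n| = ceil(log2 n):
    \[
      \text{separation to be resolved} \sim \tfrac{1}{n}
      \quad\Longrightarrow\quad
      \text{distinguishable positions required} \sim n = 2^{\log_2 n} \approx 2^{|n|}.
    \]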

Apples & Oranges? Comparing Unconventional Computers (2010)  [Journal paper]* [Conference paper]* [Conference slides]

In the International Journal of Computers, vol. 4, no. 4, ISSN 1998-4308 (invited paper)
Earlier version in New Aspects of Systems Theory & Scientific Computation, N. Mastorakis, V. Mladenov, Z. Bojkovic (editors), ISBN 978-960-474-218-9 (invited paper)

This peer-reviewed, invited journal paper appears in the International Journal of Computers. It was presented, also by invitation, at the Tenth International Conference on Systems Theory and Scientific Computation, in the peer-reviewed proceedings of which an earlier version of the paper appears. The slides used in the presentation are available here. The work forms part of the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models.

Abstract. Complexity theorists routinely compare—via the pre-ordering induced by asymptotic notation—the efficiency of computers so as to ascertain which offers the most efficient solution to a given problem. Tacit in this statement, however, is that the computers conform to a standard computational model: that is, they are Turing machines, random-access machines or similar. However, whereas meaningful comparison between these conventional computers is well understood and correctly practised, that of non-standard machines (such as quantum, chemical and optical computers) is rarely even attempted and, where it is, is often attempted under the typically false assumption that the conventional-computing approach to comparison is adequate in the unconventional-computing case. We discuss in the present paper a computational-model-independent approach to the comparison of computers’ complexity (and define the corresponding complexity classes). Notably, the approach allows meaningful comparison between an unconventional computer and an existing, digital-computer benchmark that solves the same problem.

Keywords: Computational complexity, computational resource, dominance, theoretical computer science, unconventional computation.
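
The ‘pre-ordering induced by asymptotic notation’ mentioned in the abstract is, in the conventional, single-resource setting, just the usual big-O comparison of resource functions; for reference:

    % The conventional comparison that the paper generalizes:
    \[
      f \preceq g \;\iff\; f \in O(g) \;\iff\;
      \exists c > 0 \;\exists N \;\forall n \ge N :\; f(n) \le c\,g(n).
    \]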

Unconventional Complexity Measures for Unconventional Computers (2009)  [Journal paper]*

In Natural Computing, vol. 10, no. 4, G. Rozenberg, H.P. Spaink (editors-in-chief), Springer, ISSN 1567-7818 (print)/1572-9796 (online) (invited paper)
DOI: 10.1007/s11047-010-9226-9

This peer-reviewed, invited journal paper appears in Natural Computing. It forms part of the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models.

Abstract. Unconventional computers—which may, for example, exploit chemical/analogue/quantum phenomena in order to compute, rather than electronically implementing discrete logic gates—are widely studied in both theoretical and practical contexts. One particular motivation behind unconventional computation is the desire efficiently to solve classically difficult problems—we recall chemical-computer attempts at solving NP-complete problems such as the Travelling Salesperson Problem—, with computational complexity theory offering the criteria for judging this efficiency. However, care must be taken here; conventional (Turing-machine-style) complexity analysis is not always appropriate for unconventional computers: new, non-standard computational resources, with correspondingly new complexity measures, are often required. Accordingly, we discuss in the present paper various resources beyond merely time and space (and, indeed, discuss various interpretations of the term ‘resource’ itself), advocating such resources’ consideration when analysing the complexity of unconventional computers. We hope that this acts as a useful starting point for practitioners of unconventional computing and computational complexity.

Keywords: Complexity; computational resources; factorization; precision; unconventional computation.

What is a Resource? (2009)  [Conference slides]

This conference presentation was given at Complexity Resources in Physical Computation. The slides used in the presentation are available here. The work forms part of the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models.

Abstract. We discuss the notion of ‘computational resource’ in a general, computation-model-independent context, in particular considering several interpretations of resource: as commodities consumed during computation; as operations permitted by the paradigm of computation, or by enveloping physical laws; as a system’s manufacturing (as opposed to running) costs; etc. We argue that qualifying our notion of resource using only something like Blum’s axioms still allows undesirable complexity behaviour, and suggest suitable further restriction.
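
For reference, Blum's axioms, to which the abstract alludes, are standardly stated as follows (the further restriction that the talk advocates is not reproduced here):

    % Blum's axioms: given an acceptable enumeration (phi_i) of the
    % partial computable functions, a family (Phi_i) of partial
    % computable functions is a complexity measure iff
    \[
      \mathrm{dom}(\Phi_i) = \mathrm{dom}(\varphi_i) \quad \text{for every } i,
    \]
    \[
      \{(i, x, m) : \Phi_i(x) = m\} \ \text{is decidable.}
    \]
    % Turing-machine time and space both satisfy these axioms.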

Beyond Blum: What is a Resource? (2008)  [Journal paper]

In the International Journal of Unconventional Computing, vol. 6, nos. 3 – 4, A. Adamatzky (editor-in-chief), Old City Publishing, ISSN 1548-7199 (print)/1548-7202 (online)

This peer-reviewed journal paper appears in the International Journal of Unconventional Computing. It forms part of the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models.

Abstract. When analysing a Turing machine’s complexity, we can appeal to decades of experience to determine which resources (typically time steps or tape cells) to measure. More fundamentally, we have Blum’s criteria for admission as a valid resource.

When analysing a non-Turing computer’s complexity, the situation is less clear. What resources are relevant for, say, an analogue computer? Can we meaningfully compare a Turing machine’s time complexity with an optical computer’s precision complexity? Crucially, what should we admit as a resource in the context of non-standard computation?

Our aim is to specify a suitable, non-Turing-computer (in fact, not-necessarily-Turing-computer) analogue of Blum’s axioms. We start with the existing axioms, but show that, alone, they are insufficient. Accordingly, we further restrict the notion of resource: we define normality of resource, advocating consideration only of normal resources during non-standard computers’ complexity analyses. This eliminates some ‘deceptive’ complexity behaviour encountered with Blum’s axioms alone.

Keywords: Computational complexity; non-Turing/non-standard computer; resource; Blum’s axioms; normalization; dominance.

Dominance: Consistently Comparing Computational Complexity (2008)  [Research report]

Oxford University Computer Science department’s Research Report series, RR-08-09

This report appears in Oxford University Computer Science department’s Research Report series as RR-08-09. It forms part of the EPSRC-funded project, Complexity and Decidability in Unconventional Computational Models.

Abstract. Though complexity theory strives primarily to categorize problems according to their complexity, it is typically the complexity only of methods of solving problems that we can directly measure. Specifically, we have an upper bound for a problem’s complexity: namely, the complexity of the most efficient known solution method for the problem. In order to improve such bounds, it is desired to consider sets of solution methods that are as complete as possible; but, whilst comparisons of computing systems’ respective efficiencies can—via O-notation—readily be made when the systems conform to the same computational model and when the comparison is with respect to the same resource, it is not clear how to compare model-heterogeneous sets of solution methods—this imposes a constraint on the size of sets of methods that can be considered, and a corresponding constraint on the quality of bounds on the complexity of problems.

Here, we propose the notion of dominance as a way of augmenting the power of O-notation to compare systems’ complexity. In particular, our augmentation allows meaningful comparison of the respective complexities, with respect to different resources, of computing systems that conform to different computational models, thus allowing consideration of larger, model-heterogeneous sets of solution methods and, hence, improved bounds on problems’ complexity.

Computational Complexity in Non-Turing Models of Computation; The What, the Why and the How (2008)  [Journal paper] [Conference paper] [Conference slides] [Conference video]

In Electronic Notes in Theoretical Computer Science, vol. 270, no. 1, B. Coecke, I. Mackie, P. Panangaden, P. Selinger (guest editors), Elsevier, ISSN 1571-0661
DOI: 10.1016/j.entcs.2011.01.003
In the proceedings of Quantum Physics and Logic/Development of Computational Models 2008

This peer-reviewed journal paper appears in Electronic Notes in Theoretical Computer Science. It was presented at Quantum Physics and Logic/Development of Computational Models 2008, in the peer-reviewed proceedings of which an earlier version of the paper appears. A video of the presentation is available here, and the slides used therein here.

Abstract. We preliminarily recap what is meant by complexity and non-Turing computation, by way of explanation of our title, ‘Computational Complexity in Non-Turing Models of Computation’.

Based on investigation of a motivating example, we argue that traditional complexity theory does not adequately capture the true complexity of certain non-Turing computers, and, hence, that an extension of the theory is needed in order to accommodate such machines.

We propose a framework of complexity that is not computation-model-dependent—that, rather, is extensible so as to accommodate diverse computational models—, and that allows meaningful comparison of computers’ respective complexities, whether or not the comparison be with respect to different resources, and whether or not the computers be instances of different models of computation.

Whilst, we suggest, complexity theory is—without some modification—of limited applicability to certain non-standard models, we hope that the ideas described here go some way to showing how such modification can be made, and that members of the non-Turing-computation community—not least participants of Quantum Physics and Logic/Development of Computational Models 2008—find these ideas both useful and interesting.

Keywords: Computational complexity, non-standard/non-Turing computational models, precision.

Factorizing RSA Keys, an Improved Analogue Solution (2008)  [Journal paper] [Conference paper] [Conference slides]

In New Generation Computing, vol. 27, no. 2, Y. Suzuki, M. Hagiya, H. Umeo, A. Adamatzky (guest editors), Ohmsha/Springer, ISSN 0288-3635 (print)/1882-7055 (online)
DOI: 10.1007/s00354-008-0059-3
Earlier version in Natural Computing, Proceedings in Information and Communications Technology, vol. 1, Y. Suzuki, M. Hagiya, H. Umeo, A. Adamatzky (editors), Springer, ISBN 978-4-431-88980-9
DOI: 10.1007/978-4-431-88981-6_2

This peer-reviewed journal paper appears in New Generation Computing. It was presented at the Second International Workshop on Natural Computing, in the peer-reviewed proceedings of which an earlier version of the paper appears. The slides used in the presentation are available here.

Abstract. Factorization is notoriously difficult. Though the problem is not known to be NP-hard, neither efficient, algorithmic solution nor technologically practicable, quantum-computer solution has been found. This apparent complexity, which renders infeasible the factorization of sufficiently large values, makes secure the RSA cryptographic system.

Given the lack of a practicable factorization system from algorithmic or quantum-computing models, we ask whether efficient solution exists elsewhere; this motivates the analogue system presented here. The system’s complexity is prohibitive of its factorizing arbitrary natural numbers, though the problem is mitigated when factorizing n = pq for primes p and q of similar size, and, hence, when factorizing RSA keys.

Ultimately, though, we argue that the system’s polynomial time and space complexities are testament not to its power, but to the inadequacy of traditional, Turing-machine-based complexity theory; we propose precision complexity (defined in our previous paper [1]) as a more relevant measure, and advocate, more generally, non-standard complexity analyses for non-standard computers.

Keywords: Factorization, Analogue Computer, Computational Complexity, Precision Complexity, RSA Cryptography.

1. Blakey, E., “On the Computational Complexity of Physical Computing Systems”, in Proceedings of Unconventional Computing 2007 (Adamatzky, A; Bull, L.; De Lacy Costello, B.; Stepney, S. and Teuscher, C. editors), Luniver Press, pp. 95–115, 2007.

On the Computational Complexity of Physical Computing Systems (2007)  [Conference paper] [Conference slides]

In Unconventional Computing 2007, A. Adamatzky, L. Bull, B. De Lacy Costello, S. Stepney, C. Teuscher (editors), Luniver Press, ISBN 978-1-905986-05-7

This peer-reviewed conference paper was presented at Unconventional Computing 2007, in the proceedings of which it appears. The slides used in the presentation are available here.

Abstract. By far the most studied model of computation is the Turing machine model. It is the suggestion of Church’s Thesis that, as far as computability is concerned, consideration of this model alone is sufficient; but what of complexity?

The traditional notion of computational resource, in terms of which computational complexity may be defined (since, in essence, complexity is nothing more than required resource viewed as a function of input size), caters almost exclusively for algorithmic models of computation such as the Turing machine model. Accordingly, the most commonly encountered computational resource is run-time of a Turing machine or algorithm or equivalent.

We here extend the definition of resource, notably so as to include the physical notion of precision with which we can make measurements; this allows characterization in a more insightful way of the complexity of computations performed by analogue, DNA, quantum and other physical computers.

The primary intent of this (ongoing) work is not that it be of practical use by, for example, offering physical solution methods that improve upon the speed, efficiency or similar offered by existing—especially digital—computers. Rather, the work is a theoretical study: the wider, more physical framework of computation is presented as a context in which to generalize the closely related notions of resource and complexity. The ultimate aim, from this theoretical viewpoint, is to shed light on whether complexity is inherent in problems as opposed to solution methods, whether the latter be physical or algorithmic.
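
The parenthetical remark above, that complexity is in essence required resource viewed as a function of input size, can be made explicit as follows (a standard worst-case formulation, given here for reference rather than as the paper's exact definition):

    % Worst-case complexity of a computer C with respect to a resource R:
    \[
      \mathrm{Complexity}_{R,C}(n) \;=\; \max_{x \,:\, |x| \le n} R_C(x),
    \]
    % the greatest amount of R that C consumes on any input of size at
    % most n; taking R to be run-time recovers time complexity, and
    % taking R to be precision gives the measure advocated in the paper.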

An Analogue Solution to the Problem of Factorization (2007)  [Research report] [Patent]

Oxford University Computer Science department’s Research Report series, RR-07-04
Associated patent: System and Method for Finding Integer Solutions, United States patent 7453574

This report appears in Oxford University Computer Science department’s Research Report series as RR-07-04. The method of factorization described is the subject of a US patent, applied for by IBM and with the author as sole inventor.

Abstract. The task of factorizing a given integer is notoriously difficult, to the extent of rendering computationally infeasible the extraction of factors of numbers beyond a certain size. This infeasibility is what makes the RSA cryptographic system, for example, secure.

We describe an analogue method¹ of factorizing. Just as with traditional algorithms, there is a practical limit to the size of numbers that the method can factorize; in contrast with traditional algorithms, however, the method suffers no increase in calculation time as the input number approaches this limit.

The process described exploits a direct physical implementation of a geometric formulation of the problem of factorizing; this allows factors of numbers within the allowed range to be ascertained (or else primality guaranteed) virtually instantaneously.

¹ The method is the subject of a pending US patent, applied for by IBM and with sole inventor Ed Blakey.
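
As a purely illustrative digital sketch of the geometric formulation mentioned in the abstract (and emphatically not the analogue method itself, which realises the geometry physically rather than searching point by point), factorizing n can be viewed as looking for an integer point on the hyperbola xy = n; the names below are illustrative only.

    from math import isqrt

    # Illustrative sketch only: a digital search for an integer point
    # (x, y), with 2 <= x <= y, on the hyperbola x*y = n.  The analogue
    # apparatus in the report realises this geometry physically instead
    # of testing candidate points one by one.

    def factor_via_hyperbola(n):
        for x in range(2, isqrt(n) + 1):
            if n % x == 0:
                return x, n // x        # integer point found: (x, n/x)
        return None                     # no such point: n is prime (or n < 4)

    print(factor_via_hyperbola(8633))   # (89, 97): a 'balanced' semiprime, as in RSA keys

Such a digital search costs on the order of √n divisions; the complexity framework described elsewhere on this page argues that the analogue realisation avoids this time cost but pays instead in precision.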

A Cellular-Automatic Implementation of Communication Channels, with particular consideration of Several Intersecting Channels (2002)  [MSc dissertation]

Unpublished academic dissertation

This dissertation was submitted as part of the author’s studies for the degree of Master of Science in Mathematics and the Foundations of Computer Science (Oxford, 2001 – 2002, Distinction).

Abstract. Channels of communication (e.g. wires along which are sent electrical pulses which encode a bit-stream) are in reality, and in particular in three spatial dimensions, free to pass over or under each other so as to cross over without intersecting; a rudimentary consideration of their two-dimensional analogue shows however that, assuming wires of strictly positive thickness, to cross two channels requires that they intersect. It is therefore natural to ask whether a system of two such intersecting channels can in a suitable, two-dimensional mathematical model be implemented in such a way that each channel can successfully carry its respective data from source to sink, without the two input-streams interfering (specifically at the intersection); this is the problem on which this dissertation focuses.

The approach is to model channels in a way reminiscent of both cellular automata and systolic arrays; notably, both space and time are rendered discretely. It is shown that in this model, the problem described above can be solved (a claim whose proof is constructive)—this is arguably the most insightful conclusion of the dissertation. It is then natural to consider the optimality (in various senses) of a solution; accordingly such discussion is presented here. Consideration is also given to the success with which the ‘spirit’ of the intuitively-stated motivating problem is captured by the mathematical model.

Concluded in this dissertation is that there exist solutions to those problems potentially arising in situations in which channels intersect, and also that to cross channels instead of routing them around each other is in some senses beneficial.

The Group Theory Behind Rubik’s Cube (2000)  [BA extended essay]

Unpublished academic essay

This extended essay was submitted as part of the author’s studies for the degree of Bachelor of Arts in Mathematical Sciences (Oxford, 1998 – 2001, First Class Honours).

Abstract. This essay is a study of Rubik’s cube in the context of group theory. It deals with a standard 3×3×3 cube, exploring the group of available moves and its action on sets of the cube’s constituent parts, discussing conjugation and parity, and ultimately showing the group of moves to be isomorphic to a better-understood group. The essay is not a ‘hint book’ for the cube—I don’t offer algorithms to solve it—but rather a discussion of the group theory involved.
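
For reference, the order of the group studied in the essay (the group of moves of the standard 3×3×3 cube) is the well-known value below; its factors reflect precisely the permutation and orientation constraints, including the parity considerations, that the essay discusses.

    % Corner permutations and twists, edge permutations and flips,
    % subject to the usual orientation and parity constraints:
    \[
      |G| \;=\; \frac{8! \cdot 3^{7} \cdot 12! \cdot 2^{11}}{2}
          \;=\; 43\,252\,003\,274\,489\,856\,000.
    \]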


Some of the above publications are also available on websites including ORA, Google Scholar, DBLP, PubZone, arXiv and Driver.

Page last updated on 13.iii.2013.