Monday, September 23, 2013

Just because fingerprints can be hacked doesn't make them useless in the iPhone 5S

As this article states, the fingerprint reader of the new iPhone 5S has been hacked by the Chaos Computer Club.

But does that mean Apple is "stupid" as they say, and that fingerprint authentication is unwise?

No, for the following reasons:

  • Right now, many people avoid passcode locking because it is slow. Fingerprint authentication will encourage them to lock their phones, because unlocking becomes faster.
  • Passcode locking is almost certainly less secure than even hackable fingerprints, because a passcode can be observed by someone looking over one's shoulder.
  • The average thief who keeps a lost phone they found, or mugs someone and runs off with their phone, generally won't have time to perform sophisticated fingerprint forging before the owner locks or wipes the device remotely.
  • It improves accessibility for the blind.

The lesson is that we should approach security from several directions. Avoid keeping critical information in plaintext on any computer or phone, protected by just one method. Use two-factor authentication, obfuscation, and passwords/passcodes in addition to fingerprints for such data. Also arrange for remote wiping in advance.

I have other suggestions for Apple (and others thinking of using this technology).

  1. Use geofencing. As an option, allow fingerprint-only access in the home or other places where the phone recognizes it spends a lot of time; it could 'learn' the geographic coordinates of the user's workplace, for example, but require the passcode when elsewhere.
  2. Allow longer time intervals for passcode-required access. Currently the passcode can be required immediately, or after an interval of up to 15 minutes has passed; the only other alternative is 'no passcode'. However, an interval of half an hour, an hour or even a day could be very useful too, to deter theft, especially in conjunction with geofencing and requiring entry of an Apple ID to change the passcode.
  3. Keep developing biometrics: Fingerprint recognition combined with facial recognition and/or voice recognition could greatly increase the difficulty of hacking. For example, with both fingerprint and facial recognition (both instant), a hacker couldn't just lift a fingerprint without also obtaining a photo of the user, which would require knowing whose phone it is.

The idea is that someone reluctant to enter their passcode very often might be more willing if it were required only once in a while. A rough sketch of how suggestions 1 and 2 might combine appears below.
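
To make these suggestions concrete, here is a minimal sketch in Python of how geofencing, a longer passcode interval and the fingerprint check might combine into one unlock decision. This is purely my own illustration: the names, thresholds and structure are hypothetical and have nothing to do with Apple's actual implementation or APIs.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # Hypothetical policy illustrating suggestions 1 and 2: a fingerprint alone
    # suffices in 'trusted' places the phone has learned, and the full passcode
    # is demanded only elsewhere, once a long interval has elapsed.

    @dataclass
    class UnlockContext:
        in_trusted_place: bool          # e.g. learned home or workplace coordinates
        last_passcode_entry: datetime   # when the passcode was last typed
        fingerprint_matched: bool       # result from the fingerprint sensor

    PASSCODE_INTERVAL = timedelta(hours=24)   # assumed 'once a day' setting

    def unlock_decision(ctx: UnlockContext, now: datetime) -> str:
        """Return 'unlock' or 'require_passcode'."""
        if not ctx.fingerprint_matched:
            return "require_passcode"          # no match: fall back to the passcode
        if ctx.in_trusted_place:
            return "unlock"                    # suggestion 1: fingerprint-only at home/work
        if now - ctx.last_passcode_entry < PASSCODE_INTERVAL:
            return "unlock"                    # suggestion 2: passcode only once per interval
        return "require_passcode"              # elsewhere, and the interval has expired

    # Example: away from home, passcode last entered two days ago
    ctx = UnlockContext(False, datetime.now() - timedelta(days=2), True)
    print(unlock_decision(ctx, datetime.now()))   # -> require_passcode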


Wednesday, May 29, 2013

My policy on link spam in comments on my blog

More and more often I receive emails asking me to moderate 'link spam': links embedded in a comment on my blog primarily or solely for 'search engine optimization' purposes.

The comments often say something like 'Great blog, good points'. Sometimes they are actually well-thought-out comments on the material in the post, and are attached to a relevant post. However, I do not accept comments with links unless the comment and all the links meet the following criteria:

  1. The text on which the link is placed makes it clear to the reader where the link points, e.g. your company, your product, yourself or an informational site.
  2. The comment itself says something relevant, and is not there for the sole purpose of exposing the link.
  3. I believe that the linked page has relevant information (or a product or service) that matches the subject of my blog post and adds value to the post. The information does not have to agree with what I have said; in fact I welcome argument and contradiction.
  4. If the comment mentions a product, service or company then what is being marketed is something that I am not morally opposed to and think readers of the post could potentially benefit from (although I would not ever endorse or even verify products or services in links).
  5. The poster uses a verifiable identity. They must give their email, some other legitimate means of contacting them, or else the linked page or site needs to list a person with this name when searched. I sometimes will contact the person to verify it is them.
  6. The site being linked to is, in my opinion and at the surface level, legitimate and respectable and neither plastered with advertisements nor poorly crafted.

Here's an example: Today there was a comment from an accounting company on my post about solar energy. Upon visiting the company's website it seems the company provides services to help people cost-justify solar installations. Points 3, 4 and 6 seemed to be satisfied, so I would have accepted the link if the other rules had been followed.

Here is the text of the comment, however:

Hi, nice post. Well what can I say is that these is an interesting and very informative topic on solar energy financial management. Thanks for sharing your ideas, its not just entertaining but also gives your reader knowledge. Good blogs style
too, Cheers!

This kind of 'flattery' adds little of relevance. I would have accepted it as friendly encouragement if there were no links, but the presence of a link makes such wording violate rule 2, since it provides no additional useful information.

To add something even slightly useful and not break rule 2, the poster could have said, "People buying solar installations may need help doing the needed financial analysis; companies like ours can help with that."

The link was buried under "solar energy financial management". Since the linked page was not a general page about that topic (e.g. a Wikipedia page or some other purely informational, unbiased site), rule 1 was violated. To avoid breaking rule 1, the commenter needed to put the link on the name of the company.

Furthermore, the person leaving the comment gave the name of a person, but a search yielded no such person at the company in question, violating rule 5.

I suggest that bloggers in general adopt rules similar to mine.




Friday, May 24, 2013

UML in Practice talk at ICSE: And How Umple Could Help

I just finished attending the ICSE talk by Marian Petre of the Open University, entitled "UML in Practice".

She conducted an excellent interview-based study of 50 software developers in a wide variety of industries and geographical locations. Her key question was, "Do you use UML?"

She found that only 15 out of 50 use it in some way, and none use it wholeheartedly.

A total of 11 use it selectively, adapting it as necessary depending on the audience. Within this group, use of the diagram types was: class diagrams: 7; sequence diagrams: 6; activity diagrams: 6; state diagrams: 2; and use case diagrams: 1.

Only 3 used it for code generation; these were generally in the context of product lines and embedded software. Such users, however, tended not to use it for early phases of design, only for generation.

One used it in what she called 'retrofit' mode, i.e. "Not unless the client demands it for some reason".

That leaves the 35 software developers who do not use it (70%). Some reported historical use, and some of these did in fact model using their own notation.

The main complaints were that it is unnecessarily complex, lacks the ability to represent the whole system, and has difficulties when it comes to synchronization of artifacts. There were also comments about certain diagram types, such as state machines being used only as an aid to thinking. In general, the diagram types were seen as not working well together.

She did comment on the fact that UML is widely taught in educational programs.

My overall response to this paper is, 'bingo'. The paper backs up research results we have previously published, which served as a motivation for the development of Umple.

Features of Umple that are explicitly designed to help improve UML adoption include:
  • Umple can be used to sketch (using UmpleOnline) and the sketch can become the core of high quality generated code later on.
  • It is a simplified subset of UML, combatting the complexity complained about in Petre's research.
  • It explicitly addresses synchronization of artifacts by merging code and UML in one textual form: UML, expressed textually, is simply embedded in code, with the ability to generate diagrams 'on the fly' and to edit the system by changing either the code or those diagrams.
  • It integrates diagram types: State machines work smoothly with class diagrams, for example.
  • Diagrams like state machines finally become useful in a wide variety of systems, not just embedded systems.
I hope that if Umple becomes popular, then in a few years we could do a study like this and report quite different results.

Scaling up Software Engineering to Ultra-Large Systems: Thoughts on an ICSE Keynote by Linda Northrop


Linda Northrop just gave an interesting talk at ICSE 2013 about ultra-large-scale systems (ULS).

My takeaways from this talk are the following points:

  • ULS refers to systems in which large volumes of most of the following factors combine synergistically to increase complexity: source code in multiple languages and architectures, data, device types and devices, connections, processes, stakeholders, interactions, domains (including policy domains) and emergent behaviors.
  • ULS systems run in a federated manner; they are on all the time, with inevitable failures handled and recovered locally, so as not to affect the system as a whole. The analogy to the functioning of a city (where fires occur every day) was very apt.
  • Build-time and run-time are one-and-the-same: Pieces of a system need to be replaced on the fly, and dynamic updating and reconfiguration needs to be possible.
  • They inevitably involve 'wicked' problems with inconsistent, unknowable requirements that change as a result of their solution.
  • Development can neither be entirely agile (due to the need to co-ordinate some aspects of the system on a vast scale), nor follow traditional 'requirements-first' engineering. On the other hand, parts of a system can be developed in an agile manner.
  • All areas of software engineering and computer science research can be used to help solve issues in ULS. Examples include HCI studies of how diverse groups of users use diverse parts of such systems, or computational intelligence applications to such systems.

She gave some examples, including the smart grid, climate modelling, intelligent transportation and healthcare analytics. Actually, it is not clear to me that climate modelling necessarily fits the definition. It may have large volumes of code, and run in a distributed manner, with federated models and quite a few stakeholders and policy domains, but do a majority of the other factors above apply? Perhaps.

From my perspective, the key to ensuring that ULS systems can be built and work properly is to apply the following techniques and technologies. However, in order to do this we need to properly educate computer scientists and software engineers about these items; much of this knowledge exists today but is not universally taught, and hence not applied:

  1. Model driven development (with tools that generate good quality code in multiple languages and for multiple device types)
  2. Distributed software architecture and development
  3. Rugged service interfaces, so subsystems can be independent of each other and have failsafe fallbacks (a sketch appears at the end of this post)
  4. Test-driven development: Where requirements are unknowable, it is still possible to specify those parts of systems that can be understood with rigorous tests. Subsystems so-specified can then be confidently plugged together as requirements evolve.
  5. Spot-formality: Formal specification of parts of a federated ULS system that are critical to safety, the economy, or the environment. 
  6. Usability and HCI, to ensure that the human parts of the system interact with the non-human parts effectively.


My Umple research helps address item 1, and is moving towards addressing items 2, 3 and 5. We deploy items 4 and 6 in the development of Umple itself.
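
As a small illustration of item 3 (and of the point above about failures being handled locally), here is a minimal Python sketch of a 'rugged' service interface. This is my own illustration, not anything from Northrop's talk: the class, the thresholds and the flaky pricing subsystem are all hypothetical.

    import random
    import time

    # A subsystem call is wrapped so that repeated failures trip a breaker and a
    # failsafe fallback answer is returned, keeping the failure local rather than
    # letting it cascade through the larger system.

    class RuggedService:
        def __init__(self, call, fallback, max_failures=3, reset_after=30.0):
            self.call = call                  # the real (possibly remote) operation
            self.fallback = fallback          # safe default when the call is unavailable
            self.max_failures = max_failures
            self.reset_after = reset_after    # seconds before retrying a tripped service
            self.failures = 0
            self.tripped_at = None

        def request(self, *args):
            if self.tripped_at is not None:
                if time.time() - self.tripped_at < self.reset_after:
                    return self.fallback(*args)      # breaker open: fail safe locally
                self.tripped_at = None               # window elapsed: try the real call again
                self.failures = 0
            try:
                result = self.call(*args)
                self.failures = 0
                return result
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.tripped_at = time.time()    # trip the breaker
                return self.fallback(*args)

    # Example: a flaky 'pricing' subsystem with a conservative fallback answer.
    def flaky_price(item):
        if random.random() < 0.5:
            raise ConnectionError("pricing subsystem unavailable")
        return {"item": item, "price": 4.99}

    service = RuggedService(flaky_price, lambda item: {"item": item, "price": None})
    for _ in range(5):
        print(service.request("widget"))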

Sunday, May 19, 2013

Some lessons from MiSE at ICSE

I just finished attending the two-day Modeling in Software Engineering workshop at the International Conference on Software Engineering in San Francisco.

Here are some of the take-away lessons for me (these do not necessarily reflect the ideas of the speakers, but rather my interpretations and/or extensions of their ideas).

Industrial use of modeling: There was very interesting discussion about the use of modeling in industry, and there seem to be two key and related directions for such use. Michael Whalen on Saturday gave lots of examples of the use of Matlab and Simulink in various critical systems (and particularly the use of Stateflow). Lionel Briand, on the other hand, talked about using UML and its profiles to solve various engineering problems; again, however, he mostly focused on critical systems. In a panel he pointed out that most of the Simulink models he had worked with are just graphical representations of what could just as well be written in code (i.e. with little or nothing in the way of additional abstraction).

What struck me was that both presenters, and others, seemed to embrace what I might call 'scruffy' modelling: Briand talked about users adapting UML to their needs, and others talked about Simulink as a tool that does not have the formal basis of competing tools but nonetheless serves its users well.

Many people in the workshop pointed out that we need to boost the uptake of modelling. Various ways to achieve this were emphasized:

  • Improve education of modelling
  • Build libraries of examples, including exciting real-world ones, and ones that show scaling up
  • Make tools that are simpler and/or better so more 'ordinary' developers will consider taking up modelling
  • Allow modeling notations to work with each other and other languages and tools

It turns out that all four of these have long been objectives of my Umple project. So it seems to me that if the Umple project pushes on at its present pace, we stand to have a big impact.

Speaking of Umple, I gave a short presentation that seemed to be well received, although my personal demonstrations to a number of participants seemed much more effective; people appeared to be quite impressed. The lesson from this is that people really can see the advantages of our approach, but a hands-on and personal approach may work best as a way to help people see the light.

Context: Another theme that repeatedly appeared at the MiSE workshop was 'context'. Briand pointed out that understanding the problem and its context is critical before working on a model-based solution; the modelling technique to be used will depend deeply on this context. Context can be requirements vs. design, the specifics of the domain (for example, the fact that space systems must be radiation hardened), or some aspect of the particular problem.

In my opinion, they are certainly right: Understanding the context is critical, and the tool, notation or technique needs to be selected to fit the context. However, I also believe that we need to work on generalities that can apply to multiple contexts, in the same manner that general-purpose programming languages can be used in multiple contexts. For example, the general notion of concept/class generalization hierarchies can be applied in almost every context, whether it be modeling the domain, specifying requirements for the types of data to be handled, or designing a system for code generation. I think state machines can also be applied in a wider variety of contexts where people currently do not apply them: They are used in many real-time systems, and they have been applied to specifying navigation in user interfaces, but in my experience they can be applied in many other kinds of system, such as in this Umple example.
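
I cannot reproduce the linked Umple example here, but the following minimal Python sketch (my own illustration, with hypothetical states and events) shows the kind of non-embedded use I have in mind: a state machine governing the lifecycle of an everyday order-processing record.

    # A business-level state machine (an order's lifecycle), illustrating that
    # state machines are useful well beyond embedded and real-time systems.
    # The states and events are hypothetical, chosen only for illustration.

    TRANSITIONS = {
        ("Draft",   "submit"):  "Placed",
        ("Placed",  "pay"):     "Paid",
        ("Placed",  "cancel"):  "Cancelled",
        ("Paid",    "ship"):    "Shipped",
        ("Shipped", "deliver"): "Delivered",
    }

    class Order:
        def __init__(self):
            self.state = "Draft"

        def fire(self, event: str) -> None:
            key = (self.state, event)
            if key not in TRANSITIONS:
                raise ValueError(f"event '{event}' not allowed in state '{self.state}'")
            self.state = TRANSITIONS[key]

    order = Order()
    for event in ["submit", "pay", "ship", "deliver"]:
        order.fire(event)
        print(order.state)    # Placed, Paid, Shipped, Delivered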

Testing: An interesting theme that came up several times related to testing: It was pointed out that it is worthwhile to generate tests from a model, but it also must be respected that in the context of a model used to generate code, these tests serve only to verify that the code generator is working properly! Such tests do not validate the model. Additional testing of the system is always essential.

Semantics and analysis: There was a lot of agreement that the power of modeling abstractions can be leveraged to enable analysis of the properties of systems. To do this, however, it seems to me that the semantics needs to be pinned down and better defined. 'Scruffy' use of UML and Simulink seems to detract from these possibilities. Again, one of the objectives of Umple is to select a well-defined subset of UML, to define its semantics very well, and to be able to analyse system designs in addition to generating systems from the models.


Saturday, April 6, 2013

Tips for doing well in a science fair from a long-time judge, and long-ago participant

For many years, I have been a judge at the Ottawa Regional Science Fair. I was also once a judge at the Canada Wide Science Fair. From grades 7-12, I entered science fairs every year and won some prizes.

The following is a bit of wisdom for youth who want to do really well, impress the judges and win prizes.

1. Start really, really early. For example, if the science fair is in March, think about your project and get going in January, or even October. I am always sad when I am judging a project, notice a problem, point it out, and the student says, "yes, I know, I noticed that too, but it was too late, I was doing the experiment just a couple of days before it was due". Think about entering a science fair just like you would a sports contest: Plan to enter and take the time needed to get better and better at it. Don't treat it like a piece of homework. When I was a teenager, I actually worked on one project over three years, and entered different 'phases' of the project as I got better and better.

2. 'Know your stuff' well. Spend extra time reading books from the library, reading on the Internet, talking to your parents (if they know about science), talking to your teachers, contacting real scientists by email, and so on. Look up things you don't understand.

3. Be imaginative and try out different things: The more creativity you show, the more you will impress the judges. This goes back to the first point: To be creative, you need time to try out the ideas you have, and maybe even to start again, or explore different approaches if the first approach doesn't work.

4. Avoid doing experiments that are exactly the same as others have done, or that come straight out of books and websites. Certainly it is good initially to try experiments that you copy from others, to learn how to do science. But for a science fair you want to change things a little and try different variations from what others have proposed.

5. Start small, and then add more and more to your project as you learn more and get better. When I was a teenager I learned how to make some electronic circuits from a kit and got pretty good at making them work. Then I got a book that told me how to design electronics, bought a bunch of components and made a very complicated system. It looked really impressive, but it didn't work properly. What I should have done is start with something very small, get it working, and then repeatedly try something a little more sophisticated. By the way, I did win a prize for my system that 'didn't work', but I might have won a bigger prize if I had approached it more slowly, getting each new bit working as I added it. The same advice applies if your project is a computer program: Start with a simple program and get it working. Add a little more and get that working. Keep doing this repeatedly. We call this approach 'agile'.

6. Make sure you learn key aspects of the scientific method if your project is an experiment. The following are some examples:

  • Test with more than one of each thing. So, for example, if you are growing plants with three different types of fertilizer, don't just grow three plants, see if you can grow 9 (three of each). If you don't do this, and one ends up being smaller or dying, you don't know whether it was because of the fertilizer, or because it caught a disease, or just was slightly different naturally.
  • Repeat your experiment. This is similar; the idea is that you try your whole experiment again to ensure you get the same result. You obviously need time to do this.
  • Make sure you have a control. In the above case, that would mean one group of plants has no fertilizer.
  • Make sure you keep everything else constant. In this example, that would mean that all the plants get the same soil, pot, sunlight, temperature, etc. Each plant should start out the same size as well (e.g. from seed).
  • Make sure you measure everything relevant. In the plant example, you might measure growth every day, but you could also measure the colour and shape of the leaves for example.
  • Use the right measuring tools and practice measuring so you know you are getting the right measures. For example, I judged a science fair where three different projects needed to measure the amount of salt in water (salinity). One of them measured pH (acidity) instead, another measured density instead, but the third got a kit for measuring salt in pools. That was by far the best choice. And report your results using the metric system: This is what scientists all around the world use. 

7. Don't get your parents to do the work for you. Use your parents for advice; have them help with tricky things, but don't let your parents take the lead. By all means do some projects with your parents, but for the science fair you need to show what you have done mostly independently. Judges can almost always spot 'parents' work': it stands out as sophisticated stuff that the student can't really explain fully.

8. Make your display look really nice. Use graphs, photos, and diagrams. Give it clear headings, organizing the different aspects of what you are presenting, such as 'Background', 'Hypothesis' (the main idea you are testing out), 'Method', 'Results' and 'Conclusions'. Emphasize key points and words using colour, bold type, etc. Where you are showing text, make sure it is in big print, big enough that somebody standing about 150 cm away can read it. Don't write paragraphs or even full sentences: Just write abbreviated points. If you also want to say things as paragraphs and sentences, put these in a separate report that you display on your table.

9. When presenting, focus on what you did, your results, and your conclusions. Avoid taking too much time on the background (the judge can read that or ask you questions), and avoid spending too much time talking about unrelated topics. Several times I have judged environment projects where the students did a nice experiment but spent a lot of their presentation on the bad state of the world's environment, rather than the details of their own project.

10. Don't ever read from a script: Presentations work best when you are talking freely (extemporaneously). If you find this hard, practice over and over.

11. Accentuate the positive. If you have had results that have partly worked, and partly not, be honest and admit that you were only partly successful, but emphasize your success. I had one case where a student said his experiment didn't work (he had expected the water to be completely desalinated) when in fact he could have said, "I reduced the salinity by 75%". In my own system that I talked about in point 5 above, I focused on the bits that did work.

12. Learn the 'rules' of the science fair. For example, the chemicals, electrical devices and water you can have on display will be limited. You need to know this so your whole exhibit won't be rejected for safety reasons on judging day. Have photographs (printed or on a computer) of any equipment you cannot display. Make sure you also know how wide and high you can make your display; often you are allowed to make a higher display than you might think, and that can give you more space to display interesting things. When I was a youth, one year I made a double-high display with pull-down 'blind'-style additional information; I won a trip to the Canada Wide Science Fair. The next year at the Canada Wide Science Fair, almost everybody had tall displays.

13. Practice presenting your project in front of others before judging day: Make sure you can describe it in the allotted time (e.g. 8-10 minutes). Have others ask you unexpected and challenging questions so you can practice giving answers 'on the spot'. The others could be parents, other teachers, cousins, uncles and aunts: Just ask people if they are willing to be an audience.

14. Remember that regardless of whether you win a prize, you have won by learning a lot about your subject, learning how to do science, and learning how to work independently.

Monday, February 25, 2013

Solar power has a bright future - provided sensible government policy is applied

This morning The Oil Drum published an excellent article on the pricing of solar power.

Takeaway messages from this article are:


  • Solar power prices are now, in many markets, lower than what consumers pay for electricity from the grid. This is because of dramatically reduced prices of panels and inverters, driven by economies of scale and technological improvement. The trend will continue: just as computer prices trend down as technology improves, the same will happen for solar photovoltaics.

  • Because of the above, it now pays to install and generate your own power at sunny southern latitudes, and the positive-payoff geographical regions will steadily expand (latitude is the biggest factor, but cloud cover is also an issue). Hence more and more people will install such systems, including both private consumers and companies. In the long run this bodes very well for lowering fossil fuel consumption and reducing future climate change.

  • The market for producing solar equipment has shifted to low production-cost markets, as happened with other technology products. That is harmful to the production industry in developed countries, but on the other hand the installation industries should continue to experience growth and profits due to demand for installation, and energy-intensive industries will benefit from cheaper power. Ultimately there should be tremendous net gains to economies that encourage installation.

  • Governments have been fouling up the market by suddenly chopping feed-in tariffs, the fixed rates paid for electricity produced on your rooftop. The problem was that these were set at extremely high levels, and then governments realized that, with the dramatically lower costs of solar production, the tariffs were way too high. However, rather than cutting them entirely, they need to be brought down to sensible levels, so it remains possible to sell to the grid. Society will benefit tremendously from having a solar generator on most roofs. But since the sun only shines some of the time, and on sunny summer afternoons such installations make much more electricity than the underlying building needs, it is necessary to sell excess power to the grid. Without this ability the impetus to install is significantly reduced. Rates should be set at an economically justifiable level that changes over time and that is sufficient to ensure people will install systems, but also ensures no windfall profits.

  • Governments also need to set up the right environment for investment in transmission and storage of solar-generated power.

  • Even going off-grid entirely (which requires setting up your own storage system) is beginning to become an attractive option, and will become more attractive over time for all consumers. 

  • The market for electric cars will be boosted in tandem with increasing installation of solar photovoltaics, since recharging your own vehicle will result in big cost savings, and your vehicle, when not in use, can also serve as storage.

  • Systems installed today can be expected to last 10-20 years with significant maintenance (inverter replacement) at about 10 years. However as with all technology, reliability is likely to improve, so even longer time horizons may be possible, and systems installed today may last longer than expected.


The article has a lot of very interesting equations that can be used by businesses, consumers and economists to properly work out the business case for solar power.
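
The article's own equations are considerably more complete, but the following Python sketch shows the general shape of such a business-case calculation: a simple payback estimate for a rooftop installation. All numbers are assumptions of my own for illustration only; they are not taken from the article.

    # Rough simple-payback estimate for a rooftop solar installation.
    # Real analyses (like those in the article) also discount cash flows,
    # model panel degradation, and account for feed-in tariff rules.

    system_size_kw      = 5.0      # installed capacity (assumed)
    cost_per_kw         = 2500.0   # installed cost, $/kW (assumed)
    annual_kwh_per_kw   = 1200.0   # yield, kWh per kW per year (depends on latitude and cloud)
    grid_price_per_kwh  = 0.15     # $/kWh avoided by self-consumption (assumed)
    self_consumed_share = 0.6      # fraction of output used in the building (assumed)
    feed_in_per_kwh     = 0.08     # $/kWh paid for the exported remainder (assumed)

    capital_cost  = system_size_kw * cost_per_kw
    annual_output = system_size_kw * annual_kwh_per_kw
    annual_value  = annual_output * (self_consumed_share * grid_price_per_kwh
                                     + (1 - self_consumed_share) * feed_in_per_kwh)
    payback_years = capital_cost / annual_value

    print(f"Capital cost: ${capital_cost:,.0f}")            # $12,500
    print(f"Annual value of output: ${annual_value:,.0f}")  # about $730
    print(f"Simple payback: {payback_years:.1f} years")     # about 17 years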

Wednesday, February 13, 2013

Why many queries about God refer to the Ottawa Senators and Daniel Alfredsson

Today I was asked to appear on CBC Radio to explain why Apple's Siri is responding to certain questions about God with answers that imply that Daniel Alfredsson is God! Here is the podcast URL from CBC Ottawa's 'All in a Day' show, which featured the interview.

The questions 'What does God look like?' and 'Show me a picture of God' show the following.



When asked 'What is God's home town?', the reply is Gothenburg, Sweden.

When asked what team God plays for, the response shown below is: 'The Senators defeated the Sabres by a score of 2 to 0 yesterday'.



My guess is that this is happening for one of the following reasons:


  • Someone at Apple (a Sens fan), or a small group, has planted this deliberately.
  • A bunch of people on the web have tagged Daniel Alfredsson as 'God' (or someone has been quoted as referring to him as God), and Siri is finding this information and making the wrong inference.
  • It is a random bug in the software that Siri uses (less likely).

Note that even Watson, of Jeopardy fame, made some errors, and Siri isn't anywhere near as sophisticated. Most questions to Siri about God turn up answers indicating that 'religion is for humans', or offers to do a web search for the answer. This happens when you ask for photographs of God, for example.