I was prompted to write this post by a tweet that led me to this Letter to PC Chairs. In it, a group of eminent computer scientists express how they want the field "to become a better empirical science by promoting both observation and reproduction studies."
I applaud this initiative. Much computer science research needs better empirical validation.
However, I have some criticisms of the open letter as it stands. Firstly, the push to improve empirical validation has been around for my entire career, especially in software engineering, yet the letter's call to action suggests that what they are seeking is something new.
My main criticism, however, is that I have seen the pendulum swing too far the other way. In some conferences it is now common to have papers with good ideas rejected because they do not have a rigorous empirical evaluation. This causes just as many problems.
From my perspective, we are rejecting far too many papers of all kinds in computer science conferences. If a paper has a rigorous evaluation, or is a well-executed replication of an existing study, it should be published. If a paper has good ideas with thoughtful analysis, it should also be published, even if it doesn't have much of an evaluation. And if a paper has a healthy mix of these, in other words an incremental idea with a moderate evaluation, then it also deserves to be published.
We shouldn't be rejecting a paper purely because "it has insufficient evaluation" as long as it contains an interesting or novel idea. And we shouldn't be rejecting papers that are "pure empirical studies" or "mere replications". We need to be balanced.
Many of the good computer science conferences have very low acceptance rates (between 12 and 30 percent) and remain fairly fixed in their attendance numbers over the years. In fact, there is often stagnation, where a high proportion of conference attendees are graduate students presenting papers. Their supervisors and many other people in the field don't go because they have nothing to present. This is not going to promote debate and development of the field.
I agree that papers should be rejected if they are badly written, contain flawed reasoning or bad statistical analysis, present ideas that are "half-baked", don't say anything new, etc. But more papers with either decent ideas and/or empirical studies should be accepted, so that the conferences grow in size over time.
The authors of the "Letter to PC Chairs" point out that fields such as biology and medicine have a tradition of rigorous empirical evaluation. True. We can certainly learn from them. But there are also papers in these fields that are case studies, that express new ideas, or that simply describe a single sample of a new species, syndrome or medical procedure.
I have published many papers with empirical evaluation. I actually find that it is the "ideas" papers that are harder to get published. I would like to have discussion of the ideas at a conference before I embark on the years-long process of performing rigorous experiments.
I have probably been guilty of rejecting too many papers as a program committee member. My tendency is to want to be "fair and consistent" with how other papers will be treated, and if papers are being rejected for insufficient empirical evaluation, I tend to do the same. I have therefore contributed to some ideas papers being rejected that perhaps should have been accepted. I think they would have been accepted if there had been a carefully written set of criteria, sent to all reviewers, that encourages acceptance of a wide variety of types of paper.
A few years ago I co-authored the criteria for CSEE&T that describe the kinds of papers that should be acceptable in that conference. I actually think those criteria may have erred on the side of demanding too much rigorous empirical evaluation, at the expense of interesting, ideas-oriented papers.
Monday, February 21, 2011
Automatic vibrate and loud mode calendar tagging: A phone feature that should be universal
Almost everyone has experienced the embarrassment of having their phone ring in a meeting or some other inappropriate place. We have probably all snickered when this has happened to some well-known person, and reached for our phone to make sure we have remembered to put it on vibrate mode.
By the same token, we have all probably missed calls because we left our phone on vibrate, or set to ring at an excessively low volume in noisy environments.
It seems to me that every smartphone should have the ability to tag calendar entries as 'vibrate only' or 'ring loud', which would override the default phone setting. While they are at it, mobile OS producers should also allow tagging an event as 'airplane mode'. Value-added extensions would include features to automatically detect wireless requests for all phones to be put temporarily on vibrate when in a theatre or library.
A web search indicates that support for aspects of this can be obtained on the BlackBerry. Certain HTC devices can be set to vibrate during calendar entries marked 'busy'. But I want to indicate I am busy in many contexts when I don't mind being phoned, such as when in transit to an appointment. Being busy in a calendar entry simply means not being available to be booked.
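To make the idea concrete, here is a minimal sketch in Python of the decision logic I have in mind: an explicit per-event ringer tag, when present, overrides both the phone's default profile and the cruder 'busy means vibrate' heuristic. Every name in the sketch (the event fields and the tag values) is hypothetical, not any vendor's actual API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CalendarEvent:
    title: str
    busy: bool                        # the standard 'show me as busy' flag
    ringer_tag: Optional[str] = None  # hypothetical per-event tag: 'vibrate', 'loud', 'airplane' or None

def ringer_mode_for(event: Optional[CalendarEvent],
                    default_mode: str = "normal",
                    busy_means_vibrate: bool = False) -> str:
    """Decide which ringer mode the phone should adopt while 'event' is in progress."""
    if event is None:
        return default_mode
    if event.ringer_tag is not None:       # an explicit per-event tag always wins
        return event.ringer_tag
    if busy_means_vibrate and event.busy:  # the cruder heuristic some handsets already offer
        return "vibrate"
    return default_mode

# An explicitly tagged meeting goes to vibrate; an untagged 'busy' transit entry does not.
print(ringer_mode_for(CalendarEvent("Board meeting", busy=True, ringer_tag="vibrate")))  # vibrate
print(ringer_mode_for(CalendarEvent("Drive to appointment", busy=True)))                 # normal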
There may be patents impeding making this feature more widespread; this patent application, for example, covers vibrate mode for theatres. There may be others pending.
However, IETF RFC 2445, the iCalendar standard, is the likely culprit. This 1998 standard predates smartphones and doesn't include automatic ringer modes such as the ones I am describing. Google Calendar and Apple's iOS iCal base their calendar formats on RFC 2445, so deviating too far from it would make calendaring applications non-interoperable. There are already plenty of bugs that I think derive from attempting to adhere to the standard, such as this.
RFC 2445 does contain a provision for adding non-standard fields starting with 'X' to 'push the envelope'. However, unless Microsoft, Google, RIM and Apple all agree on the same extensions, calendars will not sync properly, to the annoyance of customers.
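As a purely hypothetical illustration, such an extension, and the client code to read it, could be as simple as the following; the property name X-RINGER-MODE is my own invention and is not part of RFC 2445 or of any shipping product.

# A minimal VEVENT carrying a hypothetical X-RINGER-MODE extension property.
SAMPLE_VEVENT = """BEGIN:VEVENT
UID:example-123@example.org
DTSTART:20110221T140000Z
DTEND:20110221T150000Z
SUMMARY:Project review meeting
X-RINGER-MODE:VIBRATE
END:VEVENT"""

def extract_ringer_mode(vevent_text: str) -> str:
    """Return the value of the hypothetical X-RINGER-MODE property, or 'DEFAULT' if absent."""
    for line in vevent_text.splitlines():
        if line.startswith("X-RINGER-MODE:"):
            return line.split(":", 1)[1].strip()
    return "DEFAULT"

print(extract_ringer_mode(SAMPLE_VEVENT))  # -> VIBRATE

Reading such a property is trivial; the hard part, as noted above, is getting all the vendors to agree on it.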
What is really needed, therefore, is an update of RFC 2445. I hope this will come about eventually and that it isn't inhibited by patents (I will have more to say about my concerns regarding software patents in future posts).
In the meantime, I hope at least the 'busy means vibrate' half-solution mentioned above will be implemented more widely.
Friday, February 18, 2011
A consumer charter for net neutrality, transparency and pricing
Internet service is one of the consumer utilities that the majority of people in wealthier countries now subscribe to. In some areas landline Internet service is a monopoly, like water, electricity or natural gas; in other areas it is a duopoly supplied by your choice of cable-TV or phone company (DSL or fiber). As such, the providers have a lot of market power. There are some factors reducing the market power: One can subscribe to satellite Internet, although this is expensive; and one can use 3G or 4G cellular service, but this has limited bandwidth and is also expensive. Regulators require some providers to share their lines on a wholesale basis, but this is not universal.
I think Internet service should be subject to more competition and the providers should be held to high standards. Consumers should know they will always benefit from net neutrality, have fair and competitively priced services, and be able to transparently see information about all aspects of their service.
In the following I consider small business owners to also be consumers, provided their main business is not network services. By 'quality of service' I am referring to the combination of factors such as the throughput (amount of data per unit time), latency (delay in having data sent or received), and jitter (variability in that delay). By ISPs, I include not just landline, but also wireless and satellite providers, unless I state otherwise.
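For concreteness, here is a tiny illustrative calculation of those three measures, in Python and with invented numbers (treating jitter, as is common, as the spread of the delay samples).

from statistics import mean, pstdev

# Invented sample values, for illustration only.
bytes_transferred = 25_000_000   # 25 MB moved...
elapsed_seconds = 10.0           # ...in 10 seconds
throughput_mbps = bytes_transferred * 8 / elapsed_seconds / 1e6
print(f"throughput: {throughput_mbps:.1f} Mbit/s")               # 20.0 Mbit/s

latency_samples_ms = [31, 29, 35, 90, 30, 33]                    # round-trip delays of successive packets
print(f"latency (mean): {mean(latency_samples_ms):.1f} ms")      # the typical delay
print(f"jitter (std dev): {pstdev(latency_samples_ms):.1f} ms")  # how much that delay varies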
As a consumer of Internet service here's what I would like to see all ISPs provide:
1. Full access to the Internet: A consumer should be able to access any server on any IP address on the Internet, should be able to use any DNS server to resolve any valid domain, and should be able to use any protocol supported by both the server and the Internet Engineering Task Force. Exceptions to this can only be made with the consumer's explicit consent, such as to protect children from inappropriate content or to prevent accidental access to dangerous sites such as those that perform phishing. Rationale: Allowing ISPs to judge the appropriateness of Internet sites gives them too much power. There are certainly dangerous and illegal sites out there, but there are alternatives, such as security software and education measures, that are better than turning the ISP into a branch of the police.
2. Fair connection to all services: An ISP should not provide unfavourable routing or quality of service to any particular server or protocol on the Internet. It may provide faster direct connections or caching to popular services in order to speed up its overall service, but in making such arrangements, payments from those services must be arranged only on a cost-recovery basis, and quality of service to other services must not be degraded. Any such arrangements with popular services must be disclosed to all consumers. Rationale: Consumers should be able to choose the services they access, rather than having to put up with what the ISP has chosen to favor.
3. Fair competition in value-added services: An ISP may not charge extra for routing Internet services of any kind to an external provider; this includes VOIP telephony, video services, tethering of one device to another (see my earlier post), and peer-to-peer services. An ISP may provide such services itself at extra cost, but if it does so, then access to these services must also be available at no extra cost from anywhere on the Internet, and there must not be a reduction in price due to 'bundling' of more than 20% on any such service. Rationale: ISPs want to provide value-added content; however, it is anti-competitive for them to make it faster or cheaper for consumers to choose their particular brand of content.
4. Special service quality for some doesn't affect others: A consumer may make arrangements to pay extra for a particular quality of service for a particular application, such as guaranteed low latency and high bandwidth for a telemedicine application. However, capacity for such services must be built and reserved separately, so that quality of service for other consumers cannot decline no matter how intensively this special service is used.
5. Throttling is transparent and kept to a minimum: An ISP may limit (throttle) the maximum bandwidth of a consumer's connection only if the following conditions are met (a sketch of such a policy, with illustrative numbers, follows the rationale below):
a) The consumer's connection to the ISP is shared (wireless or shared cable for example);
b) The capacity of the connection is near its maximum;
c) The consumers subject to throttling are using substantially more capacity than many other users and it is occurring over an extended period of time;
d) The throttling will only be to the degree necessary and continue only as long as necessary to ensure users with lesser demands on the network can obtain good quality of service;
e) The throttling is applied gradually, not drastically (e.g. reducing maximum bandwidth by 10%, then 20%, then 30%, as necessary);
f) The consumer is notified about the exact policy for throttling, and can opt to be notified whenever throttling is initiated, by methods such as email or text messaging;
g) Limitations on bandwidth are fairly applied to all of the user's connections and protocols;
h) The consumer can at all times query the ISP to find out the exact amount of throttling currently in effect, and the amount applied at any given time over at least the last two billing cycles.
Rationale: Some people object outright to throttling; however, all network connections involve, to some extent, the sharing of a limited resource. The sum total of every customer's potential maximum throughput will always exceed the network's capacity. Therefore the ability to slow down the heaviest users, in a fair way and on an occasional basis, is a necessary evil. Other utilities are also subject to throttling: in a drought, watering of lawns and filling of pools is limited; if a transmission line goes down, people are asked to limit consumption or else suffer rolling blackouts. The criteria listed make throttling transparent, which it currently is not. This transparency will force ISPs to build capacity such that throttling becomes unnecessary.
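Here is that sketch, in Python; every threshold and step size is an arbitrary illustration rather than a recommendation of particular values.

def throttle_fraction(is_shared_link: bool, link_utilization: float,
                      user_share: float, heavy_hours: float,
                      current_fraction: float = 0.0) -> float:
    """
    Return the fraction by which a heavy user's maximum bandwidth is reduced.
    Illustrative only: 10% steps, applied solely when the shared link is near
    capacity and the user has been well above typical usage for an extended period.
    """
    congested = is_shared_link and link_utilization > 0.90   # conditions (a) and (b)
    heavy_user = user_share > 0.10 and heavy_hours > 6       # condition (c), arbitrary thresholds
    if congested and heavy_user:
        return min(current_fraction + 0.10, 0.50)            # condition (e): ramp up gradually
    return max(current_fraction - 0.10, 0.0)                 # condition (d): back off once unnecessary

# A user taking 20% of a 95%-utilized shared link for 8 straight hours:
print(throttle_fraction(True, 0.95, 0.20, 8))        # 0.1 -- a first gentle step
print(throttle_fraction(True, 0.95, 0.20, 8, 0.1))   # 0.2 -- the next step if congestion persists

Conditions (f) through (h) are about disclosure rather than computation, so they would live in the ISP's reporting systems rather than in logic like this.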
6. Strictly limited packet inspection: An ISP must not perform packet inspection other than: a) to determine the protocol and routing of the user's communication for the purposes described in this document; b) to detect malware or hacking; or c) under court order. Rationale: Deep packet inspection is intrusive and is akin to reading postal mail or wiretapping a traditional phone line. Even the postal service would open a package to help determine where to deliver it if the mailing label fell off, and would set aside packages that seemed dangerous.
7. No data modification without opt-in: An ISP must not modify packets or protocols, such as to place advertisements on web pages or to provide search pages when a DNS lookup fails, unless prior arrangements have been made with the consumer to allow this. Such modification must only take place when the consumer explicitly opts in, and the consumer must be able to easily opt out at no cost at any time. Rationale: More and more ISPs deliver search pages and ads when a DNS lookup fails. Others place ads on web pages by hijacking parts of the HTML. This is unacceptable without explicit consent.
8. Free choice of anti-virus or security software: If an ISP provides at no charge its own custom or branded software for virus protection and other aspects of security, it must also provide deep discounts on other brands of similar software. Rationale: This type of software doesn't just benefit the consumer; it benefits the ISP too. However, consumers get locked into the ISP's brand, which may not work on all operating systems, and may double as a tool for selling bundled value-added services. Consumers need to be free to say no to the ISP's software without feeling that they must then pay full price for something they could have had for free.
9. No bloatware: An ISP must not require users to install software provided by the ISP, or even strongly suggest that they do so, nor must it partner with hardware providers to have such 'bloatware' pre-installed.
10. Free choice of hardware: An ISP must allow and facilitate consumers' use of their own modem and wireless network hardware, and must not price its own hardware at rates that unduly coerce consumers into choosing the ISP's brand. The only exception would be hardware that damages the network.
11. Stable IP address: An ISP should refrain, where possible, from changing a user's IP address. Where this is not possible, the consumer must be able to determine the policies for when IP addresses will be changed. Rationale: Many ISPs rotate IP addresses to make it harder for consumers to run servers and to make it harder for attackers to target a particular consumer. However, there are unfortunate unintended consequences: websites that have been attacked will often blacklist a particular IP address, and when that address is rotated to some other unsuspecting consumer, the new consumer will be denied service. Also, long-lived client connections can crash, as can home routers. Finally, as the next item points out, consumers should be able to run servers.
12. Servers allowed on a limited basis: Consumers should be able to run small-scale web and other servers as long as the total bandwidth of connections to those servers does not exceed 10% of the consumer's allowed bandwidth on landlines, and 5% on wireless connections. Excess usage would be subject to throttling in the manner described above. Rationale: The days when upload capacity was minuscule are long past, except on satellite connections. Allowing people to run small-scale servers will make the Internet more open, as it was intended to be, and will foster innovation.
13. Usage limits with reasonable charges for overages: Internet access may be sold with soft limits on total usage per billing cycle, only if the following are respected: a) the consumer must be able to see their history of usage and their usage limit; b) a mechanism must be in place for informing consumers by a variety of means, including email and text messaging, when they are approaching their limit, have reached it, or are likely to reach it before the end of the billing period; c) the incremental cost of additional usage, beyond the basic amount, should be no more than 20% higher on a per-gigabyte basis than the ISP's marginal cost of providing service. (See my previous post for a suggested rate for landline connections as of early 2011.) Rationale: Flat charges for significant amounts of usage per month allow consumers to use the service in innovative ways without constantly worrying about how much capacity they are using, but it must be possible for ISPs to recoup at a fair rate the cost of extra service for their heaviest users. (A small worked example follows this item.)
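To illustrate with invented numbers: if an ISP's marginal cost of carrying extra traffic were $0.10 per gigabyte, the proposed ceiling would cap overage pricing at $0.12 per gigabyte, and the notification rule could be as simple as the sketch below (the 90% warning threshold is just an example).

marginal_cost_per_gb = 0.10                              # assumed ISP marginal cost, $/GB (illustrative)
max_overage_price_per_gb = marginal_cost_per_gb * 1.20   # the proposed 20% ceiling
print(f"overage price cap: ${max_overage_price_per_gb:.2f}/GB")   # $0.12/GB

def approaching_limit(used_gb: float, limit_gb: float, warn_at: float = 0.9) -> bool:
    """Notify the consumer (item b) once usage passes an agreed fraction of the soft limit."""
    return used_gb >= warn_at * limit_gb

print(approaching_limit(used_gb=92, limit_gb=100))       # True -- time to send an email or text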
14. Short-term access at fair prices: Access to the Internet sold on a per-minute or per-hour basis, such as at WiFi hotspots or hotels, should be no more than three times more expensive, per unit time, than an equivalent service sold on a monthly basis, although a fixed account setup fee or per-computer setup charge may be levied. Rationale: Hotel and hotspot access varies from free to egregiously expensive, and it varies a lot from country to country too. For example, if an ISP would provide 100 GB over 30 days to a resident for $30, then it would seem reasonable that, if the ISP sets up a hotspot, a traveller should be able to access 3 GB of data over the course of a day for $3.
15. Reasonable roaming charges: Access to the Internet using 'roaming', where billing goes to a home ISP or wireless provider, should cost no more than twice what either the home ISP or the remote ISP would charge local users. This applies to wireless data roaming and also to roaming to WiFi hotspots. Rationale: Roaming charges are obscenely expensive. They can be hundreds of times what non-roaming charges would be, even though the marginal cost of providing roaming is minimal. For a connected, intelligent society, this kind of pricing abuse has to stop.
16. Commitment to build capacity to meet excellent service levels: It is common for ISPs, especially wireless ones, to be overwhelmed by traffic volumes, such that service becomes poor. An ISP should commit to detecting all sections of its network that regularly suffer from congestion and to building capacity such that congestion in those sections is reduced within 8 months. Rationale: If customers are paying for service and experience network slowdowns, they are not getting the service they have paid for.
17. Wholesale service at reasonable prices: Landline-owning ISPs (with copper wires, fibers or cables) must sell their services on a wholesale basis so other ISPs can offer competing services. The wholesale arrangement should be such that consumers can choose the ISP they deal with to obtain service. Furthermore, the landline owner must guarantee sufficient bandwidth on the segments of the network it controls so that the other ISPs can independently follow all the policies in this document. Rationale: In Canada, for example, phone companies are required to provide their lines on a wholesale basis to other ISPs, but cable TV companies are under no such obligation. The argument against this is that it is a disincentive for landline owners to build out their landline service. However, wholesale phone, electricity and natural gas distribution is widespread, and infrastructure investments continue to be made. Wholesale phone and long-distance pricing caused precipitous price reductions when introduced many years ago.
18. Transparency in all aspects of service: The methods and extent to which all of the above items are being adhered to should be prominently published on each ISP's website.
Due to the monopoly/duopoly situation, it is my expectation that some form of government regulation will always be required. Note that in the EU there are several directives providing for a form of Internet User Bill of Rights. The Ofcom regulator in the UK regulates the open Internet market much more than it is regulated in the US or Canada. The FCC in the US is trying to establish a compromise on net neutrality, but the charter I have presented above covers considerably wider ground. For more on the FCC rules, go to this website, and look at the items under the heading 12-23-10.
Wednesday, February 16, 2011
Final Jeopardy: Our computer overlords are here
This is the final of my three posts on the historic match of computer vs. human in the game of Jeopardy. The first two posts are here and here. I will also speculate below on when and how computing power of Watson's caliber, and beyond, will find its way into our daily lives.
So Watson won the IBM Challenge. A lot of us expected it, especially after Watson started dominating in the previous round. It was great to see Ken Jennings surge midway through the final round; I am a little disappointed, however, that he didn't make a larger wager on Final Jeopardy. He wouldn't have won, but it would have been closer.
Watson had quite a few wrong responses today, most particularly in a category in which the answers were all supposed to be the names of keys on a computer keyboard. There are two interesting things about this: the first is that Watson didn't understand the category to start with (and knew it); the second is that Watson didn't learn after noticing the correct 'questions' to the first couple of 'answers'. Machine learning is actually quite an advanced aspect of AI, and I am sure that a little more learning mixed into Watson's many algorithms might have made a difference. As with the interesting types of mistakes made on the previous days, I am sure the engineers would try to fix this if there were to be another series in a few months' time.
I wonder whether there will ever be any more Jeopardy games pitting humans against powerful computers. Now that a computer has won, it won't be so much of a sensation. However, it might be nice to have computer vs. computer matches every now and then. The computer's goofs and quirks (such as betting odd amounts) are still pretty interesting to watch, so I am sure there would be an audience for this.
There should be a competition to come up with ever more tricky games that humans can figure out but computers still have trouble with.
Within a couple of decades, I expect we will see computers of Watson's capabilities on everybody's desktop, and a few years beyond that, in everybody's phone. At that point you will be able to buy Jeopardy software to play against, and you will be able to handicap it to match your skill level. Incidentally, I don't think this will all come about simply by computers getting faster and faster. There will be considerable speed increases, but most of the gains will come from a combination of more memory, massive parallelism, architectural innovation and algorithmic innovation. Perhaps quantum computing will be part of it, and spintronics, optical interconnects and other yet-to-be-invented concepts will likely be part of the mix. I think hardware elements that emulate neurons might well be involved, and this will open up a whole new area for software. Certainly we will move from electronics on 2-D chips to fully 3-D structures; a lot of work will have to go into using energy super-efficiently so these do not overheat.
With Watson on the desktop, the supercomputers of the coming age will likely be mind-blowing in their capabilities. Hopefully they will routinely be put to uses that will have major benefits for society: Quickly inventing and testing new drugs in a fully simulated environment; ever more accurate forecasting of weather and climate change; optimizing traffic flows through cities, and maybe even helping humans negotiate lasting solutions to social problems and conflict situations.
For all of this to come about, we need a continual supply of bright students entering the fields of computer science, software engineering, computer engineering and electrical engineering. There is a shortage of such people now in the Western World, even though the opportunities in the years to come are immense.
Incidentally, in case you are wondering, the subtitle of this post is a quote from contestant Ken Jennings that he wrote as a humorous addendum to his Final Jeopardy answer. I actually think that there is a 70-80% chance that we won't ever have actual computer overlords. I am optimistic that we will be able to evolve technology in such a manner that we remain the masters of the computers, even though the latter may approach sentience. That said, we must heed the warnings in many science fiction stories, and realize that powerful computers pose a huge risk, just like many other technologies humans have developed.
Labels: Artificial Intelligence, Futurism, Jeopardy
Tuesday, February 15, 2011
Where Watson falls down - day 2
This is my second post as I follow the Jeopardy challenge between top human competitors and Watson, the IBM supercomputer. My first post discussed what happened on day 1.
Watson did fabulously compared to its human opponents today, building up a huge lead at the half-way point in the two-game series. But it made an interesting blunder in Final Jeopardy.
The category was "US cities" and the answer was the city "... whose largest airport is named after a World War II hero, and whose second largest airport is named after a WW II battle".
Watson goofed badly by giving the answer "Toronto", although with question marks indicating uncertainty. Toronto has Pearson Airport and Toronto Island Airport, the latter often just called "Island Airport".
I thought this question was extremely easy for Final Jeopardy. The Battle of Midway is so well known that it quickly led me to the answer of Chicago. Then I remembered seeing an exhibit about Edward O'Hare while flying through O'Hare airport.
So how did Watson go wrong? It is interesting to try to make an educated guess:
It turns out there are lots of references to various people called Pearson who are war heroes, and this page, for example, calls Lester Pearson (after whom Toronto's largest airport is named) a hero.
However, what I think is most important is that large numbers of "island" battles (e.g. Wake Island, the Aleutian Islands, etc.) are recorded in World War II. It seems likely that the sheer number of such battles overwhelmed Watson's probabilistic analysis, apparently even overriding the category "US Cities", since Toronto is certainly not in the US!
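This is pure speculation on my part, since I don't know Watson's internals, but a toy additive scoring model with completely made-up weights illustrates how a pile of weak matches can swamp a single strong constraint such as the category:

# Toy model only -- NOT Watson's algorithm. It merely shows how many weak pieces of
# evidence can outweigh one strong penalty in a naive additive score.
evidence = {
    "Chicago": {
        "airport named after a WWII hero (O'Hare)": 0.3,   # suppose this link was only weakly found
        "airport named after a WWII battle (Midway)": 0.3,
        "is a US city (category match)": 1.0,
    },
    "Toronto": {
        "Pearson described as a hero in some sources": 0.4,
        # many WWII island battles, each weakly suggesting 'Island Airport'
        **{f"weak match: Battle of {name}": 0.3
           for name in ["Wake Island", "Savo Island", "the Aleutian Islands", "Makin",
                        "Attu", "Saipan", "Tarawa", "Guadalcanal"]},
        "is a US city (category NOT matched)": -1.0,        # a penalty, but only one term among many
    },
}

for city, terms in evidence.items():
    print(city, round(sum(terms.values()), 2))   # Chicago 1.6, Toronto 1.8 with these made-up weights

With these invented weights Toronto edges out Chicago despite the category penalty, which is at least consistent with the low confidence Watson displayed.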
This shows the shallow state of understanding that Watson has in certain areas. We humans know that no airport would ever actually be given the generic name "Island" after any of the island battles.
The state of artificial intelligence: Where Watson falls down on Jeopardy
The Jeopardy games this week pitting top Jeopardy competitors against IBM's Watson supercomputer are fascinating. They offer glimpses into how well the field of artificial intelligence is developing. Here is my analysis, after day one.
Watson combines many technologies. Firstly, it has to process natural language. It has to do this twice: once when 'reading' vast volumes of information (at this stage it can take its time), and again when it has to process a Jeopardy 'answer' in order to respond with the correct question. The subtle nuances of language, such as puns, ambiguities and shades of meaning, pose huge challenges both for Watson and for the human competitors. I think the engineers building Watson have done a terrific job in this area; Watson didn't seem to stumble too much in this task.
Secondly, Watson has to take the 'surface meaning' it obtains from processing language and build sophisticated knowledge representations in its memory. Watson excels at extracting and representing straightforward facts. But it seems to have trouble dealing with what I call 'meta-knowledge' (knowledge about knowledge) and with the deep meaning of the plots of stories. In one question it showed that it did not understand who the antagonist was in certain conflicts in the Harry Potter series, failing to correctly identify Voldemort. I think this represents the state of the art in Artificial Intelligence (AI): we will not have a truly sentient machine until all the subtle interactions in both real-world and fictional event sequences can be deeply modelled by the machine, with correct analysis of such elements as motivation, causality and subterfuge.
Thirdly, Watson applies many different algorithms to reason with various hypotheses about possible correct responses. Again, the engineers have done a superb job here.
A lot has been made of Watson's accidental repetition of a wrong response, 'the nineteen-twenties', that a human competitor had just given. I think it is an unfortunate oversight that a speech-recognition feature was not added to Watson to deal with just this situation; I believe that improving Watson so that it can listen to what the other competitors are saying would not be very difficult. The error itself, I believe, relates to a combination of slightly faulty knowledge representation (issue two above) and reasoning algorithms (issue three above).
Watson is not 'strong AI' in that it doesn't reason with full mathematical rigour. It uses a lot of probabilistic matching technology. Some people have criticised this, claiming that it therefore isn't AI. However I disagree. Frankly I think that we humans mostly use this 'scruffy' kind of AI in our reasoning too.
Where do I think AI is going? I think that we are finally seeing the dawn of truly useful AI. We are starting to be able to use it to do sophisticated analysis. I think sentient machines are still some decades away, but will appear in the lifetime of people alive today. That said, forecasters of the future of AI have consistently been wrong.
There have been many blog posts and articles about how Watson works and is doing. I recommend this post by John Rennie. A sample of the show is online at YouTube.
And by the way, a plug for the University of Ottawa, my employer: Alex Trebek, Jeopardy host, is an alumnus of our University, so these episodes have special significance for the Computer Scientists and Software Engineers here.
Labels: Artificial Intelligence, Futurism, Jeopardy
Keep credit card pricing simple: Neither consumers nor merchants should be forced to pay more
Currently in Canada there is a controversy surrounding what merchants are charged by Visa and MasterCard to process credit cards, and in particular the higher charges for 'premium' cards. As this article points out, the head of the Competition Bureau thinks it is ridiculous for the credit-card companies to think they are being pro-consumer.
I agree, but I think some issues in this controversy are being missed in the news reports.
Visa and MasterCard have 'premium' lines of credit cards that typically give consumers high-value benefits. That is fine. The problem, however, is threefold: 1) merchants have to pay higher fees when consumers purchase with these cards; 2) merchants are not allowed to opt out of these cards: if they accept Visa at all, they must accept both the high-fee and the low-fee ones; 3) this is largely hidden from consumers, who see only the 'Visa' or 'MasterCard' brand.
Everybody knows that American Express often charges more. But merchants don't have to accept Amex, and consumers know that fewer merchants do so.
To me there are two key rules that would solve this controversy:
1. If Visa or MasterCard want to have premium brands, they should not call them Visa and MasterCard, and merchants should be able to choose whether or not to accept the separately-named premium brands, just as they can choose not to accept American Express. Separate naming is essential so that a merchant can clearly indicate on their door which brands they accept and which they do not.
2. Merchants should continue to not be allowed to charge consumers more for using credit cards. 'The price you see is the price you pay' is a great simplification for consumers and has other benefits I will discuss below. I think this should apply to premium credit cards too; allowing merchants to charge more just for these (while not charging excess fees for 'regular' cards) would result in a lot of confusion in the market.
Many merchants like rule 1 but not rule 2. Visa and MasterCard like rule 2 but not rule 1. To properly serve consumers, both rules need to be in place.
Some would argue against rule 2 on the grounds that consumers can always use cash. However, if consumers suddenly had to pay a few percent more to use a credit card, many would stop using them as much. I am sure I would be one of the many who would considerably reduce their use of cards; I almost never use debit cards myself now, since they cost more than both credit cards and cash.
I think such reduced use of credit cards would have several unintended consequences:
- With people carrying around lots more cash it would increase the incentive for crime, including counterfeiting, consumer pickpocketing and merchant robberies.
- People would buy less on credit, which may be a good thing for those with debt problems, but it would likely serve to slow down the economy.
- Merchants would have higher cash-handling costs. Remember that the costs to safely store and count cash are not zero for merchants. In fact, a cashless society where everything is done electronically should cut merchants' administrative costs quite a bit.
And while we are at it, let's make it so that paying by debit card also doesn't cost consumers any more than paying cash. I am fairly convinced that the cost of running an electronic payments network must in fact be lower than that of running the 'physical cash' network, and the costs of the latter are paid by the merchants, by the banks (which charge the merchants for it, just as they charge them for the debit card system) and, yes, by the taxpayers (since the government pays for the operation of the Bank of Canada, the mint, etc.).
Monday, February 14, 2011
The ridiculous fashion for limiting salaries of politicians: Pay them properly so politics can become a more respected profession
I had already drafted this blog post on how I think politicians should be paid when I read this article in today's Globe and Mail on how Toronto councillors turned down this year's increase. Way to go Toronto! If you want to avoid attracting skilled people as your future councillors, this is exactly what you should be doing.
We need good people as our elected officials. Highly educated and intelligent people are able to command high salaries as a reward for skills that are in high demand, for the responsibility of managing a large enterprise, or as compensation for extended periods of education during which they received low pay and may have amassed high debts (e.g. the medical profession or university professors).
High achievers, however, are unlikely to want to run for public office because in general they will have to take a pay cut that will severely impact themselves and their families. Accomplished people also have to consider the stress of public life: the exposure to ridicule, the risk of not being re-elected and, in the case of our provincial and national leaders, the need to travel to a distant parliament and potentially live away from their families for long periods.
Taken together, this argues for our political representatives receiving far higher pay than they have right now. I would argue that a city councillor in a big city like Toronto or Ottawa should command a salary on the order of $150,000 per year, whereas they currently typically receive less than $100,000. A member of a provincial or federal parliament should command somewhat more because of the need to be away from home so much (perhaps $160,000, plus an extra $5,000 for every 1,000 km that the farthest point of their electoral district is from Parliament). After the above reform, political salaries should then simply be indexed to the lower of a) the cost of living, and b) the average increase for the civil service or city staff. Laws should then be passed that would make it very difficult to meddle with the salaries in future.
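As a back-of-the-envelope illustration of the scheme above, here is a toy calculation. The base figures are the ones I propose in this post; the example distance, the function names, and the assumption that the supplement is paid per full 1,000 km (rather than prorated) are just mine, for illustration.

```python
# Toy calculator for the parliamentary pay scheme sketched above. The dollar
# figures come from this post; the example distance and the per-full-1,000-km
# rounding are assumptions made only for illustration.

def mp_salary(farthest_point_km: int, base: int = 160_000,
              per_1000_km: int = 5_000) -> int:
    """Base pay plus a distance supplement for each full 1,000 km from Parliament."""
    return base + per_1000_km * (farthest_point_km // 1000)

def indexed(salary: float, cost_of_living_pct: float,
            staff_increase_pct: float) -> float:
    """Annual indexing: apply the lower of the two percentage increases."""
    return salary * (1 + min(cost_of_living_pct, staff_increase_pct) / 100)

# An MP whose riding's farthest point is about 4,300 km from Ottawa:
print(mp_salary(4300))                     # 160000 + 4 * 5000 = 180000
# A year with 2% inflation but a 0% increase for staff means no raise:
print(indexed(mp_salary(4300), 2.0, 0.0))  # 180000.0
```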
I am not opposed to a zero-percent increase for politicians some years, if that is what the staff are getting, and if the base salary is already at a reasonable level. The problem is that it is ridiculous for full-time politicians to be routinely paid less than senior people in the agencies that the politicians ultimately govern.
Stipends above the base salaries for executive politicians (ministers, mayors, etc.) should probably be performance-based. There should be a reduced stipend, or none, if certain measures are not achieved, such as meeting zero-deficit targets, service levels, unemployment levels, etc.
It is considered unethical in many professions to systematically under-price skilled services. The Code of Ethics of Professional Engineers Ontario (of which I am a member) makes it clear that the principle of adequate compensation must be upheld. Adequate compensation not only attracts and keeps accomplished people, but helps command respect for the profession. This does not mean that altruistic or pro-bono service is forbidden, just that it shouldn't become the norm for the basic work of the profession.
Labels:
Canadian-Politics,
Economics,
Politics,
Pricing
Wednesday, February 9, 2011
What is the point of the Canadian Senate: Either make it a council of eminent persons or abolish it
The Canadian Senate currently consists of members appointed by the Prime Minister. Reform is needed, but the question is, what type of reform? I would be open to seeing it abolished entirely, however below I suggest criteria for improving its function even if it remains an appointed body. I certainly don't want to see it become elected: If you make it elected, then you will end up with a powerful house, just like the US Senate, that may at times oppose the will of both the leader of Government (the prime minister) and the House of Commons and will certainly hamper the ability to pass sensible legislation.
It seems to me that the concept of an Upper House is to have highly respected people, i.e. ‘elders’ (or historically ‘nobles’) provide a safety net to ensure that ill-thought-out legislation with unintended consequences does not become law. A Senate should be able to pass such laws back for further consideration to the Lower House. To perform this role, Senators need to be respected people with experience. They can’t be people who have to worry about getting elected at the next election, since that defeats the whole purpose.
Current proposals to make the Canadian Senate elected will result in there being no appreciable difference between the two Houses of Parliament. Senators will have a ‘mandate’ for action, which may conflict with the mandate of the Government in the House of Commons. Furthermore, elected Senators are likely to have the same short-sighted tendency to think only about things that will bolster their popularity in time for the next election.
I see only two courses of action that make any sense. The first is to truly make the Senate a chamber of sober second thought by means-testing members: they must be people with long records of achievement, such as industrial or non-profit leadership positions, or else professional or academic qualifications with a long period of practice. To ensure that democracy prevails and the prime minister of the day can’t stack the Senate with people of his political stripe, it would make sense for every senator to be approved by a two-thirds majority of the members of the elected House of Commons, perhaps for 6-year renewable terms. The prime minister would then be forced to suggest people he knows would be palatable to the opposition. I also think the mandatory retirement age should be abolished. There are many older people who have a lot to contribute; the 6-year term would ensure that Senators whose abilities decline would not be reappointed.
The other course of action is to abolish the Senate. After all, it has almost always rubber-stamped bills. The only recent exception was when it defeated the Climate Change Accountability Act without any debate, purely through procedural trickery.
I certainly don’t agree with what the current Conservative government wants to do. Nor do I agree with the ‘triple-E’ concept (equal, elected and effective). In a country with provinces so widely varying in size and population, I think it pointless to try to balance membership any more than what is currently provided for in the constitution. As discussed above, I don’t agree with ‘elected’. The only thing I do agree with is ‘effective’, if that can be achieved.