[cwn] Attn: Development Editor, Latest Caml Weekly News

Alan Schmitt alan.schmitt at polytechnique.org
Tue Jan 23 01:10:20 PST 2007


Hello,

Here is the latest Caml Weekly News, for the week of January 16 to 23,
2007.

1) ocaml+twt v0.90
2) ICFP07 Call for Papers
3) Polymorphic Variants
4) glome-0.2
5) Final Call for Papers: TFP 2007, New York, USA
6) json-static: syntax magic for JSON
7) Time stamp module floating around?
8) native-code stack backtraces
9) color on linux terminal
10) va_arg values
11) function definition

========================================================================
1) ocaml+twt v0.90
Archive: <http://groups.google.com/group/fa.caml/browse_frm/thread/e856d73ad84a1241/a41261dba43cb98d#a41261dba43cb98d>
------------------------------------------------------------------------
** Mike Lin announced:

I just posted a new version of ocaml+twt, a preprocessor that lets you
use indentation to avoid multi-line parenthesization (like Python or
Haskell).

<http://people.csail.mit.edu/mikelin/ocaml+twt>

This version introduces a major backwards-incompatible change: the
eradication of "in" from let expressions, and the need to indent the let
body (as suggested by the F# lightweight syntax). This reduces the
familiar phenomenon of long function bodies getting progressively more
indented as they go along. That is, before where you had:

let x = 5 in
   printf "%d\n" x
   let y = x+1 in
     printf "%d\n" y

You'd now just write:

let x = 5
printf "%d\n" x
let y = x+1
printf "%d\n" y

I was hesitant to introduce this feature because it's extra hackish in
implementation (even more so than the rest of this house of cards). It
also removes some programmer freedom, because you cannot have the let
body on the same line as the let, and you cannot have a statement
sequentially following the let, outside the scope of the binding. But
after playing with it, I think it is worthwhile. Please let me know
what you think. I am still not completely sure that I haven't broken
something profound that will force me to totally backtrack this change,
but let's give it a try. I will obviously keep the 0.8x versions around
for those who prefer it and for existing code (including a lot of my
own).

Standard disclaimer: ocaml+twt is a flagrant, stupendous,
borderline-ridiculous hack, but it works quite well, I write all my new
code using it, and I recommend it if you like this style. On the other
hand, if someone with more free time and knowledge of camlp4 wants to
step up, I have a couple of ideas about how you might do it right...
			
========================================================================
2) ICFP07 Call for Papers
Archive: <http://groups.google.com/group/fa.caml/browse_frm/thread/4e85691ee614911f/5cca6b53cb617ad1#5cca6b53cb617ad1>
------------------------------------------------------------------------
** Matthew Fluet announced:

                              Call for Papers
       ICFP 2007: International Conference on Functional Programming
                    Freiburg, Germany, 1-3 October 2007
ICFP 2007 seeks original papers on the art and science of functional
programming. Submissions are invited on all topics from principles to
practice, from foundations to features, from abstraction to application.
The scope includes all languages that encourage functional programming,
including both purely applicative and imperative languages, as well as
languages with objects and concurrency. Particular topics of interest
include:
   * Applications and domain-specific languages: systems programming;
     scientific and numerical computing; symbolic computing; artificial
     intelligence; databases; graphical user interfaces; multimedia
     programming; scripting; system administration; distributed-systems
     and web programming; XML processing; security
   * Foundations: formal semantics; lambda calculus; type theory;
     monads; continuations; control; state; effects
   * Design: algorithms and data structures; modules; type systems;
     concurrency and distribution; components and composition;
     relations to object-oriented or logic programming
   * Implementation: abstract machines; compile-time and run-time
     optimization; just-in-time compilers; memory management; parallel
     hardware; interfaces to foreign functions, services, components or
     low-level machine resources
   * Transformation and analysis: abstract interpretation; partial
     evaluation; program transformation
   * Software-development techniques: design patterns; specification;
     verification; validation; debugging; test generation; tracing;
     profiling
   * Practice and experience: novel results drawn from experience in
     education or industry
   * Functional pearls: elegant, instructive examples of functional
     programming

A functional pearl need not report original research results, but it
must be instructive, elegant, and fun.

ICFP 2007 also seeks Experience Reports. An Experience Report is a short
paper (2-4 pages) which need not present novel results, but which should
provide evidence that functional programming really works or should
describe obstacles that prevented it from working.  Detailed guidelines
appear below.

What's new this year?
~~~~~~~~~~~~~~~~~~~~~
Experienced ICFP authors may want to pay special attention to the points
below, which are new this year.
   * Double-blind review
   * Author-date citations
   * Supplemental material in a separate document,
     not appended to the main text
   * Morning deadline (but equivalent to late afternoon or early
     evening in many time zones of interest)
   * Experience Reports

                        Instructions for authors
                        ~~~~~~~~~~~~~~~~~~~~~~~~

By 11:00 AM Friday, 6 April 2007, Samoan time, submit an abstract of at
most 300 words and a full paper of at most 12 pages or an Experience
Report of at most 4 pages. Submissions will be accepted electronically,
at a URL to be named later. The deadline is set at Samoan time, so if
your submission is in by 11:00 AM Friday according to your local time,
wherever you are, the submission will be on time. The world clock at
<http://www.timeanddate.com/worldclock/fixedtime.html?month=4&day=6&year=2007&hour=11&min=0&sec=0&p1=282>
can give you the equivalent in your local time, e.g., 3:00 PM Friday
in Portland, 6:00 PM Friday in Boston, and midnight Friday in Freiburg.

The deadline is firm.

Your submission should explain its contributions in both general and
technical terms, clearly identifying what has been accomplished,
explaining why it is significant, and comparing it with previous work.
Make the technical content understandable to a broad audience.

Each submission must adhere to SIGPLAN's republication policy, which
appears in full at <http://www.acm.org/sigplan/republicationpolicy.htm>.
The policy means in part that your paper may not have already appeared
in a journal, conference, or workshop with published proceedings; that
you may not submit substantially the same work simultaneously to ICFP
and to another venue; and that your submission must discuss any
closely related material, including your own, that was previously
accepted at a journal, conference, or workshop with or without
published proceedings. Full details of the policy are available at the
SIGPLAN site. If you are in any doubt about whether this policy
applies to your paper, either consult the program chair in advance or
notify the chair when you submit. To do otherwise risks summary
rejection of your submission.

If your submission is accepted, you must assign copyright to ACM.
Proceedings will be published by the ACM Press.

Double-blind review
~~~~~~~~~~~~~~~~~~~
To increase confidence in the fairness and objectivity of the reviewing
process, reviewing will be double blind. Make it possible for reviewers
to evaluate your paper without having to know who you are. It should
suffice to omit your names from your submission and to avoid revealing
your identity through citation; detailed guidelines are available at
identity through citation; detailed guidelines are available at
<http://icfp07.eecs.harvard.edu/blind.html> .

Formatting
~~~~~~~~~~
Your submission must be printable on US Letter sized paper and be either
PDF or PostScript that is interpretable by Ghostscript. If this
requirement is a hardship, make contact with the program chair at least
one week before the deadline.

Your submission must be at most 12 pages (4 pages for an Experience
Report), including bibliography and figures, in the standard ACM
conference format: two columns, nine-point font on a ten-point baseline,
with pages 20pc (3.33in) wide and 54pc (9in) tall, with a column gutter
of 2pc (0.33in). (A suitable LaTeX class file is available from SIGPLAN;
see <http://www.acm.org/sigs/sigplan/authorInformation.htm> . Categories,
keywords, and so on are optional.) If you wish to supply material beyond
the 12-page limit, up to and including a full technical report, you may
attach a separate document to your submission, on the understanding that
reviewers are not expected to read it. (As a particular example, if you
feel that your submission should be supported by a lengthy technical
report, do not cite such a technical report on the web, since doing so
would reveal your identity. Please instead attach that report to your
submission.)  Detailed instructions for attaching supplementary
documents will be available on the submission web site.

The length limit is firm; submissions that do not meet these guidelines
will not be considered.

Citation
~~~~~~~~
We recommend (but do not require) that you put your citations into
author-date form. This procedure makes your paper easier to review. For
example, if you cite a result on testing as ``(Claessen and Hughes
2000)'', many reviewers will recognize the result instantly. On the
other hand, if you cite it as ``[4]'', even the best-informed reviewer
has to page through your paper to find the reference. By using
author-date form,
you enable a knowledgeable reviewer to focus on content, not arbitrary
numbering of references. LaTeX users can simply use the natbib package
along with the plainnat bibliography style.

Author response
~~~~~~~~~~~~~~~
You will have a 48-hour period (11:00 23 May to 11:00 25 May 2007 Samoa
time) to read and respond to reviews. Details of the author-response
process will be available as it approaches.

                      Special categories of papers
                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In addition to research papers, ICFP solicits two kinds of papers that
do not require original research contributions: functional pearls,
which are full papers, and Experience Reports, which are limited to
four pages. Authors submitting such papers may wish to consider the
following advice.

Functional pearls
~~~~~~~~~~~~~~~~~
To paraphrase both Jon Bentley and Richard Bird, the ideal pearl goes
beyond solid engineering into the realm of insight and creativity. Just
as a natural pearl grows from a grain of sand that has irritated an
oyster, a topnotch functional pearl should grow from a real problem
that has irritated a programmer. A pearl should be polished, elegant,
instructive, and entertaining. Ideally it should teach important
programming techniques and fundamental design principles. Past pearls
have included instructive
examples of program calculation or proof, nifty presentations of old or
new data structures, and interesting applications and programming
techniques.

Papers submitted to ICFP as pearls often miss the mark, by being too
trivial, too complicated, or somehow not quite the elegant solution one
hopes for. The key to an accepted pearl is polishing. Your pearl is
likely to be rejected if your readers get bored, if the material gets
too complicated, if too much specialized knowledge is needed, or if the
writing is distasteful.

Richard Bird advises:
   * Throw away the rule book for writing research papers.
   * Get in quick; get out quick.
   * Be self-contained; don't go deep into related work, with lengthy
     references.
   * You are telling a story, so some element of surprise is welcome.
   * Above all, be engaging.
   * Give a talk on the pearl to non-specialists, your students, or your
     department.  If you changed the order of presentation for the talk,
     consider using the new order in the next draft.
   * Put the pearl away for a while, then take it out and polish it
     again.

Experience reports
~~~~~~~~~~~~~~~~~~

ICFP has long solicited submissions on the practice and experience of
functional programming. But reports of experience are inherently
different from research papers, and when judged by the criteria of
scientific merit, novelty, or research contribution, they have not
competed well against
traditional ICFP submissions. Yet we believe that the functional-
programming community would benefit from being able to draw on and cite
the experience of others. For this reason, we have introduced the ICFP
Experience Report.

Unlike a normal ICFP paper, the purpose of an Experience Report is not
to add to the body of knowledge of the functional-programming community.
Rather, the purpose of an Experience Report is to help create a body of
published, refereed, citable *evidence* that functional programming
really works---or to describe obstacles that prevented it from working.

An Experience Report is distinguished from a normal ICFP paper by its
title, by its length, and by the criteria used to evaluate it.

   * Both in the proceedings and in any citations, the title of each
     accepted Experience Report must begin with the words
     "Experience Report", followed by a colon.

   * Experience Reports are limited in length: the suggested length is
     2 pages and the maximum length is 4 pages. Each accepted Experience
     Report will be presented at the conference, but depending on the
     number of Experience Reports and regular papers accepted, authors
     of Experience Reports may be asked to give shorter talks.

   * Because the purpose of Experience Reports is to enable our
     community to accumulate a published, citable body of evidence
     about the efficacy of functional programming, an acceptable
     Experience Report need not present novel results or conclusions.
     It is sufficient if the Report states a clear thesis and provides
     supporting evidence. The thesis must be relevant to ICFP, but it
     need not be novel.

     The program committee will accept or reject Experience Reports
     based on whether they judge the evidence to be convincing.
     Anecdotal evidence will be acceptable provided it is well argued
     and the author explains what efforts were made to gather as much
     evidence as possible. The committee will be especially convinced
     by evidence that includes *comparisons* of situations before and
     after the introduction or discontinuation of functional
     programming. Evidence drawn from a single person's experience may
     be sufficient, but more weight will be given to evidence drawn
     from the experience of groups of people.

Possible topics for an Experience Report include, but are not limited to

   * Insights gained from real-world projects using functional
     programming

   * Comparison of functional programming with conventional programming
     in the context of an industrial project or a university curriculum

   * Project-management, business, or legal issues encountered when
     using functional programming in a real-world project

   * Curricular issues encountered when using functional programming in
     education

   * Real-world constraints that created special challenges for an
     implementation of a functional language or for functional
     programming in general

An Experience Report should be short and to the point: if functional
programming worked for you in the same ways it has worked for others,
you need only to summarize the results---the main part of your paper
should discuss how well it worked and in what context. Most readers
will not want to know all the details of your project and its
implementation, but please characterize your project and its context
well enough so that readers can judge to what degree your experience is
relevant to their own projects. Be especially careful to highlight any
unusual aspects of your project. Also keep in mind that specifics about
your project are more valuable than generalities about functional
programming; for example, it is more valuable to say that your team
delivered its software a month ahead of schedule than it is to say that
functional programming made your team more productive.

If your paper not only describes experience but also presents new
technical results, or if your experience refutes cherished beliefs of
the functional-programming community, you may be better off submitting
it as a full paper, which will be judged by the usual criteria of
novelty, originality, and relevance. If you are unsure in which
category to submit, the program chair will be happy to help you decide.

                           Other information
                           ~~~~~~~~~~~~~~~~~

Conference Chair
~~~~~~~~~~~~~~~~
Ralf Hinze (Universität Bonn)

Program Chair
~~~~~~~~~~~~~
Norman Ramsey
Harvard University
Cambridge, MA, 02148 USA
Email: icf... at eecs.harvard.edu
Phone: +1 617 496 8615

Mail sent to the address above is filtered for spam. If you send mail
and do not receive a prompt response, particularly if the deadline is
looming, feel free to telephone and reverse the charges.

Program Committee
~~~~~~~~~~~~~~~~~
Nick Benton (Microsoft Research)
Matthew Fluet (Toyota Technological Institute)
Jeremy Gibbons (University of Oxford)
Kevin Hammond (University of St Andrews)
Bastiaan Heeren (Utrecht University)
Graham Hutton (University of Nottingham)
Mark P. Jones (Portland State University)
Gabriele Keller (University of New South Wales)
Fabrice Le Fessant (INRIA/LIX, France)
Todd Millstein (UCLA)
Mike Sperber (DeinProgramm)
Christopher A. Stone (Harvey Mudd College)
Andrew Tolmach (Portland State University and INRIA Rocquencourt)
Janis Voigtländer (Technische Universität Dresden)
Stephanie Weirich (University of Pennsylvania)

Important Dates
~~~~~~~~~~~~~~~
Submission:       11:00 6 April 2007, Samoa time (AST)
Author response:  11:00 23 May to 11:00 25 May 2007 (AST)
Notification:     8 June 2007
Final papers due: 20 July 2007

ICFP 2007 Web Site
~~~~~~~~~~~~~~~~~~
<http://www.informatik.uni-bonn.de/~ralf/icfp07.html>

Special Issue of JFP
~~~~~~~~~~~~~~~~~~~~
Authors of the best final papers, as determined by the program
committee, will be invited to submit journal versions for a special
issue of the Journal of Functional Programming.
			
========================================================================
3) Polymorphic Variants
Archive: <http://groups.google.com/group/fa.caml/browse_frm/thread/ddb89d2914396bfe/ed48f01f4500e7fb#ed48f01f4500e7fb>
------------------------------------------------------------------------
** Continuing the thread from last week, Jacques Garrigue said:

 > So... why actually are polymorphic variants useful? Why can't they
 > simply be implemented as normal, concrete (or how would you call
 > them? ...) variants?

The original motivation was for the LablTk library, where some types
(index for instance) have lots of small variations. At that point
there were several options:
* overloading (but ocaml loathes overloading, you could say that the
   total absence of overloading is essential to the language)
* refinement types: define a type with all constructors, and have
   restricted versions of it where only some constructors are allowed
* full polymorphism, i.e. polymorphic variants
If you eliminate the first option, then the choice is between
refinement types and polymorphic variants. Refinement types are rather
attractive: they combine precise typing with explicit declarations.
The arguments in favour of polymorphic variants are:
* they were somehow easier to add, as they are an orthogonal extension
   of the type system
* one can add new cases afterwards
* they are defined structurally: two identical definitions in two
   different modules are equivalent. This can be pretty handy at times.
* you don't need to open a module to use them: this mixes nicely with
   the ocaml programming style
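To illustrate the last two points, here is a minimal sketch (not from
the original mail; module and function names are invented): two modules
declare the "same" variant type independently, and values flow freely
between them with no open and no coercion.

```ocaml
module A = struct
  type color = [ `Red | `Green | `Blue ]
  let to_string : color -> string = function
    | `Red -> "red" | `Green -> "green" | `Blue -> "blue"
end

module B = struct
  (* A structurally identical definition in a different module. *)
  type color = [ `Red | `Green | `Blue ]
  let invert : color -> color = function
    | `Red -> `Blue | `Green -> `Green | `Blue -> `Red
end

(* A.color and B.color are the same structural type, and the
   constructors need no module prefix at the use site. *)
let s = A.to_string (B.invert `Red)
(* s = "blue" *)
```

With ordinary variants, `A.color` and `B.color` would be distinct
nominal types, and the constructors would have to be qualified or
brought into scope with `open`.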

From modularity considerations, it ends up being nicer to write

      type t = [`A of int |`B of bool]
      type u = [t | `C of char]

than

      type u = A of int | B of bool | C of char
      type t = u[A B]

Afterwards I discovered that, using some idioms, they also allow
extensible programs in a very concise way. (Much more concise than
objects, when they are relevant.)

 > Doesn't the use of polymorphic variants just mess up the function
 > type?

What do you mean by "mess up the function type"?
If you mean that, without type annotations, types and errors become
very hard to read, this is true, but the manual explicitly encourages
adding type annotations :-)
 > I'm not orthogonally against polymorphic variants, it's just that I
 > am looking for an alternative concept that could be used instead...
 > Maybe subtyped records?

In terms of expressiveness, you can simulate polymorphic variants
using objects (which are subtyped records), but this is often much
more verbose, and requires higher-order types. I have some code using
objects (visitor pattern), recursive modules, laziness, and private
row types, in an utterly non trivial way, just to do what can be done
by standard recursive function definitions using polymorphic variants...
			
** Tom then said and Jacques Garrigue replied:

 > > The original motivation was for the LablTk library, where some
 > > types (index for instance) have lots of small variations. At that
 > > point there were several options:
 > > * overloading (but ocaml loathes overloading, you could say that
 > >   the total absence of overloading is essential to the language)

 > Is there a reason for that? Is it only hard to implement or are
 > there any conceptual / strategical / theoretical reasons?


There are two kinds of theoretical reasons.
The most theoretical one is about semantics: with generic overloading,
all your semantics must take types into account. This is a problem as
most theoretical studies of lambda-calculus are based on so-called
erasure semantics, where types are discarded at execution. So this
means you must design your own, more complex semantics. Note that this
reason may not apply to constructor or record overloading, if you do
not allow the semantics to change according to the type.

The other one is about type inference. Basically, overloading has a
huge impact on type inference. The direct way to handle it, using types
alone, is deeply disruptive, as it means that some programs will be
refused for lack of type information. It is very hard to define a
clear criterion on which programs should be refused, notwithstanding
internal details of the compiler. This also applies to record and
constructor overloading. There are other approaches to the problem,
with constrained type variables (either Haskell's type classes or
G'Caml's generic type variables), but of course they add complexity to
the type system. And it is not clear that they solve the problem for
constructor and record overloading: in Haskell, record fields are not
overloaded, and constructor overloading has to be done by hand for
each type, through an explicit encoding into type classes.

Both reasons have practical impact. For the first one, using erasure
semantics means that the programmer also can discard types when
understanding the runtime behaviour of a program. For the second one,
you can write code that is maximally polymorphic, without too much
fear about the impact on performance (equality is overloaded, so
it still matters...) or strange ambiguity-related error messages.

So for the time being ocaml keeps to no overloading at all.

 > > OCaml does not, as far as I know, have any structural typing for
 > > records..

Not for records, but for objects. From a type-theoretic point of view
they are just equivalent.
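A minimal sketch (not from the original mail; the names are invented)
of what structural typing for objects means in practice: any object
with an `area` method of the right type is accepted, with no shared
class declaration.

```ocaml
(* [describe] only requires its argument to have an [area] method;
   the inferred type is < area : float; .. > -> string. *)
let describe o = Printf.sprintf "area = %.1f" o#area

let square = object
  method area = 4.0 *. 4.0
end

let circle = object
  method area = 3.14159 *. 2.0 *. 2.0
  method radius = 2.0   (* extra methods are fine: row polymorphism *)
end

let s1 = describe square   (* "area = 16.0" *)
let s2 = describe circle
```

A record type, by contrast, is nominal in OCaml: two record types with
the same fields are distinct, and a function over one does not accept
the other.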
 > Hm... Actually, what I had in mind is nominal subtyping... similar to
 > objects, in fact, objects in C++-like languages, just that they have
 > no class methods.

Reading the description below, this all looks nice, independently of
the semantics limitation described above. However, you can kiss
farewell to type inference. With such an extensive overloading, you
would need type annotations all over the place.
By the way, this all looks like the "used this feature in C++"
syndrome. Sure C++ is incredibly powerful. But it is built on rather
shaky theoretical foundations. So you can't expect to bring everything
from C++ to a new language. Why not think about new ways to solve
problems :-)
(To be completely honest, this also looks a bit like ML2000, and the
Moby language, which has a sound theoretical foundation, but never
really took off. You might want to go see for yourself.)
Jacques Garrigue

 > Now... close your eyes (but try to continue reading this ;) ) and
 > imagine you're in a dreamworld. You are programming in a language
 > that has
 >   * function overloading that allows you to have
 >        length "abcd" + length [1; 2; 3]
 >   * constructor overloading, eliminating the need of
 >        type parse_expression =
 >            Pexp_name of string
 >          | Pexp_constant of constant
 >          | Pexp_let of (pattern * parse_expression) * parse_expression
 >          | Pexp_apply of parse_expression * parse_expression list
 >          | Pexp_try of parse_expression * (pattern * parse_expression) list
 >        type typed_expression =
 >            Texp_ident of ident
 >          | Texp_constant of constant
 >          | Texp_let of (pattern * typed_expression) * typed_expression
 >          | Texp_apply of typed_expression * typed_expression list
 >          | Texp_try of typed_expression * (pattern * typed_expression) list
 >     as it can be coded as
 >        type parse_expression =
 >            Name of string
 >          | Constant of constant
 >          | ...

 >        type typed_expression =
 >            Ident of ident
 >          | Constant of constant
 >          | ...

 >   * nominal subtyping of records, with overloaded field names:
 >        type widget = {x : float; y : float; width : float;
 >                       height : float}  (* top-level type *)
 >        type button = {widget | text : string}
 >        type checkbox = {button | checked : bool}
 >        type image = {widget | url : string}

 >        type vector = {x : float; y : float}
 >        type document = {url : url}

 >     so subtypes could be applied to a function
 >        fun move : widget -> (float * float) -> unit

 >        let chk = {x = 0.0; y = 0.0; width = 10.0; height = 12.0;
 >                   text = "Check me!"; checked = false}
 >        move chk (3.0, 3.0)
 >     and types could be "discovered" at runtime:
 >        let draw widget =
 >          typematch widget with
 >              w : widget -> draw_box (w.x, w.y, w.height, w.width)
 >            | b : button -> draw_box (b.x, b.y, b.height, b.width);
 >                            draw_text b.text
 >            | i : image -> draw_image i.url (i.x, i.y)
 >            | ...

 > Do you think you would be "satisfied" even without polymorphic
 > variants?

 > I am not saying this just for fun... I want to create a language with
 > overloading, but I kinda don't really like polymorphic variants...
 > though if they turn out to be really useful, I would probably start
 > to like them.

 > Any comments?
			
** skaller said and Jacques Garrigue replied:

 > > The most theoretical one is about semantics: with generic
 > > overloading, all your semantics must take types into account. This
 > > is a problem as most theoretical studies of lambda-calculus are
 > > based on so-called erasure semantics, where types are discarded at
 > > execution.
 > Can you explain this more? The kind of overloading I'm used
 > to is a matter of lookup not semantics, i.e. it's just syntactic
 > sugar to save the human reader's memory.


Because you're doing some partial evaluation :-)
Namely you can view an overloaded function as a function taking extra
hidden parameters: the types of its arguments. If these types are
monomorphic, then you can immediately partially evaluate your function
to choose the right implementation. But if they are polymorphic, you
need to pass extra information around. This is what is done in Haskell
for instance, passing dictionaries for type classes.
So an alternative view is that overloading is a type dependent
translation from the source code to an explicit version. You can only
drop the types after this translation. So this means that you cannot
understand the source code locally: you need to be aware of the type
information around it.
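The dictionary-passing view can be sketched in plain OCaml (this is an
illustration, not from the original mail; the `show_dict` type and the
function names are invented): the "overloaded" operation becomes an
ordinary function taking an explicit dictionary for its hidden type
argument.

```ocaml
(* The dictionary: one record per "instance" of the overloaded op. *)
type 'a show_dict = { show : 'a -> string }

let show_int  = { show = string_of_int }
let show_bool = { show = string_of_bool }

(* A polymorphic user of the overloaded operation cannot resolve it
   statically, so it must thread the dictionaries through explicitly,
   exactly as a Haskell compiler does for type classes. *)
let show_pair da db (a, b) =
  "(" ^ da.show a ^ ", " ^ db.show b ^ ")"

let s = show_pair show_int show_bool (3, true)
(* s = "(3, true)" *)
```

At monomorphic call sites the dictionary argument could be partially
evaluated away, which is the point made above.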
 > > Reading the description below, this all looks nice, independently
 > > of the semantics limitation described above. However, you can kiss
 > > farewell to type inference. With such an extensive overloading, you
 > > would need type annotations all over the place.

 > Felix makes this tradeoff. Types are deduced bottom up (s-attributes),
 > so variables and function return types are calculated, but function
 > parameters must be explicitly typed.


This works if you don't add either subtyping or return-type
overloading to the mix. Then you must also know the type of variables
and the return type of functions.
 > Even in Ocaml it is necessary to write some annotations
 > in two cases:
 > (a) hard to understand error messages from type inference

 > (b) polymorphic variants


This is not exactly the same thing, as these annotations have no
impact on the semantics. Of course, one could say: if we have the
annotation, why not use it for the semantics? But this is another
story, with different implications.
 > On the other hand some code would just be impossible
 > without overloading. For example C has
 >    char, short, int, long, long long
 >    unsigned versions
 >    float, double, long double

 > which is 13 distinct 'number' types, and I would hate
 > to have to write:

 >    1 int_add 2
 >    1L long_add 2L


Actually this is not only overloading that is used here, but also
automatic conversions. When writing numeric computations in ocaml, I
find it just as painful to have to add float_of_int, as to add the dot
after all operators. (In number of characters, float_of_int might be
the biggest...)
For overloading alone, the module system could help.
If we had the operators defined with different types in each of the
numeric type modules, then one could just choose the numeric type used
with "open Mynumtype". This is one of the reasons I pushed for having
a local "let open" construct... which can be simulated by "let
module", at a slight cost.
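A small sketch of the idea (not from the original mail; the module `F`
is invented, and it assumes the local `let open` construct advocated
here, which later did land in OCaml): each numeric module redefines the
operators at its own type, and a local open selects which set is in
scope.

```ocaml
(* A module providing the arithmetic operators at type float. *)
module F = struct
  let ( + ) = ( +. )
  let ( * ) = ( *. )
end

(* Inside the local open, float code reads like integer code,
   with no dots and no float_of_int. *)
let area w h =
  let open F in
  w * h + 1.0

let a = area 3.0 2.0
(* a = 7.0 *)
```

The "let module" simulation mentioned above wraps the expression in a
local functor application instead, at the slight cost noted.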
 > So in some cases with inference and without overloading,
 > *you need the annotations anyhow*, they just go into the
 > function names in an unprincipled manner, instead of going
 > into the function parameter types in a principled --
 > and indeed syntactically enforced -- manner.

I believe this is the strongest argument for overloading: that it
allows using types in a principled way. Of course, whether this is
really principled depends on the code you write...
			
** Christophe TROESTLER said and Jacques Garrigue answered:

 > > [...] I have some code using objects (visitor pattern), recursive
 > > modules, laziness, and private row types, in an utterly non trivial
 > > way, just to do what can be done by standard recursive function
 > > definitions using polymorphic variants...
 > Sounds like a good case to see & learn the power of polymorphic
 > variants in action.  Are the two codes available somewhere (if
 > possible with some explanations ?) or are you simply referring to
 > your paper "Code reuse through polymorphic variants" ?


Well, I am. Other examples end up being much longer, even using
polymorphic variants :-)
But still, I include at the end of this mail the code to obtain a
fully extensible purely functional polymorphic visitor pattern.
(This is the polymorphic part that is particularly hard, even more
if you want to stay purely functional.)

For comparison, here is the equivalent complete code using polymorphic
variants. Note that there is no explicit visitor pattern: the match
construct provides it, with all the polymorphism needed.

type 'a eadd = [ `Num of int | `Add of 'a * 'a ]
let eval_add eval_rec : 'a eadd -> int = function
   | `Num i -> i
   | `Add (e1, e2) -> eval_rec e1 + eval_rec e2
let rec eval_add' e = eval_add eval_add' e
(* val eval_add' : ('a eadd as 'a) -> int *)
type 'a eaddneg = ['a eadd | `Neg of 'a]
let eval_add_neg eval_rec : 'a eaddneg -> int = function
   | #eadd as e -> eval_add eval_rec e
   | `Neg e -> - eval_rec e
let rec eval_add_neg' e = eval_add_neg eval_add_neg' e
(* val eval_add_neg' : ('a eaddneg as 'a) -> int *)
let n = eval_add_neg' (`Add (`Neg(`Add(`Num 3, `Num 2)), `Num 7))
(* val n : int = 2 *)
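To see the extensibility at work, the same recipe adds a hypothetical `Mul case on top of eadd without touching the existing evaluators (the eadd definitions are repeated so the sketch is self-contained):

```ocaml
type 'a eadd = [ `Num of int | `Add of 'a * 'a ]
let eval_add eval_rec : 'a eadd -> int = function
  | `Num i -> i
  | `Add (e1, e2) -> eval_rec e1 + eval_rec e2

(* The extension: reuse eval_add for the inherited cases. *)
type 'a eaddmul = [ 'a eadd | `Mul of 'a * 'a ]
let eval_add_mul eval_rec : 'a eaddmul -> int = function
  | #eadd as e -> eval_add eval_rec e
  | `Mul (e1, e2) -> eval_rec e1 * eval_rec e2
let rec eval_add_mul' e = eval_add_mul eval_add_mul' e
(* val eval_add_mul' : ('a eaddmul as 'a) -> int *)
```

As with `Neg above, the only new code is the new case and the one-line knot-tying definition.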

This also means that providing a method returning a polymorphic
variant describing the contents of an object is probably a simpler
approach to implementing the visitor pattern, even for objects. Here
is the code needed if you still want to use objects.

class type ['a] expr = object method repr : 'a end
let rec eval_add_expr (e : _ expr) = eval_add eval_add_expr e#repr
(* val eval_add_expr : ('a eadd expr as 'a) -> int *)
let rec eval_add_neg_expr (e : _ expr) =
   eval_add_neg eval_add_neg_expr e#repr
(* val eval_add_neg_expr : ('a eaddneg expr as 'a) -> int *)

Regards,

Jacques Garrigue

(* A fully extensible polymorphic visitor pattern.
    Code written with Keiko Nakata *)

class type ['a,'exp] visit_add = object
   method visitNum : int -> 'a
   method visitAdd : 'exp -> 'exp -> 'a
end

module type AddTE = sig
   type ('a,'exp) t
   type exp = < accept : 'a. ('a, exp) t -> 'a >
   val num : int -> exp
   val add : exp -> exp -> exp
end

module type AddT = sig
   include AddTE
   val eval : (int, exp) t Lazy.t
end

module AddF(X : AddT with type ('a,'exp) t = private ('a,'exp) #visit_add) =
struct
   type ('a, 'exp) t = ('a, 'exp) X.t
   class type exp = object ('e) method accept : 'a. ('a, 'e) t -> 'a end
   class num x = object (_ : #exp) method accept v = v#visitNum x end
   let num x = new num x
   class add a b = object (_ : #exp) method accept v = v#visitAdd a b end
   let add x = new add x
   class eval = object (eval)
     method private visit (e : exp) = e#accept (Lazy.force X.eval)
     method visitNum (i : int) = i
     method visitAdd r l = eval#visit r + eval#visit l
   end
   let eval = lazy (new eval)
end

module rec Add : AddT with type ('a,'exp) t = ('a,'exp) visit_add = AddF(Add)

class type ['a,'exp] visit_add_neg = object
   inherit ['a,'exp] visit_add
   method visitNeg : 'exp -> 'a
end

module type AddNegT = sig
   include AddT
   val neg : exp -> exp
end

module AddNegF(X : AddNegT with
                type ('a,'exp) t = private ('a,'exp) #visit_add_neg) =
struct
   module Add = AddF(X)
   include (Add : AddTE with type ('a,'e) t = ('a,'e) X.t)
   class neg x = object (_ : #Add.exp) method accept v = v#visitNeg x end
   let neg x = new neg x
   class eval = object (eval)
     inherit Add.eval
     method visitNeg e = - eval#visit e
   end
   let eval = lazy (new eval)
end

module rec AddNeg : AddNegT with type ('a,'exp) t = ('a,'exp) visit_add_neg =
   AddNegF(AddNeg)

open AddNeg
let n = (add (neg (add (num 3) (num 2))) (num 7))#accept (Lazy.force eval)
			
========================================================================
4) glome-0.2
Archive: <http://groups.google.com/group/fa.caml/browse_frm/thread/899842ff3c645c8d/74c5d0565cd66ce0#74c5d0565cd66ce0>
------------------------------------------------------------------------
** Jim Snow announced:

New version (0.3) of glome is up: now with a new more elegant (and
presumably asymptotically optimal but still abysmally slow) kdtree
compiler, which unfortunately segfaults on large models.  (Thanks to the
"marshalling limits" thread, I discovered that the size of the kdtree I
can construct depends on the size of the stack.  I shall have to look
into this further.)  It is also free of objects - I used Jon Harrop's
suggestion to implement mutual recursion by passing the generic
rayintersection function as an argument to any of the rayintersection
functions that need to be recursive.  (It seems to me like an odd
solution, but I think I can live with it.)  Mutation has been removed
from much of the rayintersection code, (also at Jon's suggestion) which
actually sped things up noticeably (about 15% or so).
Webpage is here: <http://syn.cs.pdx.edu/~jsnow/glome/>

Jon Harrop wrote:
 > It didn't the last time I looked. Using "include" instead of "open"
 > is often faster, probably for that reason.


I'll have to experiment with that and see what happens.

 > > There are some hybrid renderers that do just that.  There are some
 > > reasons not to do that, though; for instance, ray tracing scales
 > > better on big models (see, for instance,
 > > <http://www.openrt.de/Gallery/OliverDeussen_Sunflowers/Images/sunflowers_2.jpg>)
 > That simply isn't true. You can use trees with OpenGL and get the same
 > asymptotic efficiency whilst also being 100x faster in real terms.

 > I've written a planet renderer that adaptively tesselates a
 > 2^80-triangle planet in real-time for OpenGL.

 > I've written a 2D vector graphics engine that adaptively tesselates
 > PostScript paths into triangles so you can fly around 1Gb PDF images
 > in real time.

 > If what you said was true, that wouldn't have been possible.


Perhaps I should be more specific about exactly what it is that is
scaling.  With level-of-detail schemes (which could apply to ray-tracing
as well as GL), you can render datasets of enormous complexity, provided
you aren't trying to render them all at the same time.  Your planet demo
looks very interesting, but it looks like your polygon counts at any
particular moment aren't very high.  If you add some realistic
vegetation, the high polygon counts would slow things down quite a
bit.  OpenGL scales linearly with the number of triangles it has to
draw; ray-tracers scale logarithmically.  You can avoid some of the
memory overhead of large scenes by using instancing, but GL still has to
draw every single triangle.
Ray-tracing has its own costs; sorting an acceleration structure, for
instance, can be very slow.  Also, they currently only surpass the
performance of traditional polygon renderers on very complex scenes.
For most current rendering problems, it makes more sense to use GL right
now.  But as computers get faster, and real-time global illumination
starts to become feasible, ray tracing is likely to look very
appealing.  This is my opinion; you are free to disagree.

 > Ray tracing is simply a bad way to render images, unless they
 > are closeups of reflective spheres.


Opinions vary.  So do datasets and application requirements.
 > > So, I switched over to objects.  This reduced
 > > memory use a little, I think, but didn't help much.  It did make
 > > things a little slower, though.  There's some more detailed
 > > discussion over at ompf.org: <http://ompf.org/forum/viewtopic.php?t=336>

 > What is the memory use of my version like?


About 1.5 gigs for the 800k triangle level 4 sphereflake, same as my
version 0.2.  I think the memory consumption is elsewhere.  Most of the
memory gets consumed as the kdtree is being built.
 > Apart from the texture mapping bugs, check out these screenshots of
 > my planet demo. Now write me a ray tracer that can do that...

I doubt that whatever level-of-detail algorithms you employ in any way
preclude the use of raytracing; it would just be rather slow.  (The
OpenRT people, for instance, are working on a drop-in OpenGL
replacement, and the XFRT are working on an OpenRT replacement that is
actually open.)  Now write me an OpenGL app that can render this
correctly: <http://graphics.ucsd.edu/~henrik/images/gbox.jpg> :)
Nathaniel Gray wrote:
 > I wonder if you really need the mutual recursion.  You can often
 > avoid mutual recursion by using closures.  Instead of, say, a list
 > of objects with an isect (intersect) method you can use a list of
 > closures.


That's more or less what my original implementation did.  I switched to
objects because I wasn't sure if closures were allocating space
efficiently.  Then I switched to my current implementation because
calling object methods is slow (as evidenced by the results presented in
the "Benchmarking different dispatch types" thread).  In the end, I
don't think it made a big difference - I'm just not intersecting with
very many primitives per ray.  Every little bit helps, though.
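The dispatch style Jim describes, passing the generic intersection function as an argument to whichever intersectors need to recurse, can be sketched on a toy 1-D scene. All names and types here (shape, isect, etc.) are illustrative stand-ins, not glome's actual API:

```ocaml
(* A toy shape type: primitives plus an aggregate that must recurse. *)
type shape =
  | Sphere of float * float          (* center, radius, on a 1-D line *)
  | Group of shape list

(* A leaf intersector: distance to the sphere if the point is inside. *)
let sphere_isect x (c, r) =
  let d = abs_float (x -. c) in
  if d <= r then Some d else None

(* The aggregate needs recursion, so it receives the generic
   intersector as an argument instead of calling a method. *)
let group_isect isect_rec x shapes =
  List.fold_left
    (fun best s ->
       match best, isect_rec x s with
       | Some a, Some b -> Some (min a b)
       | None, r | r, None -> r)
    None shapes

(* The generic intersector ties the knot, with no objects involved. *)
let rec isect x = function
  | Sphere (c, r) -> sphere_isect x (c, r)
  | Group gs -> group_isect isect x gs
```

This mirrors the eval_rec pattern from the polymorphic-variants item above: the recursive knot is tied in one place, and each worker function stays independently compilable and testable.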
			
========================================================================
5) Final Call for Papers: TFP 2007, New York, USA
Archive: <http://groups.google.com/group/fa.caml/browse_frm/thread/ac2f574a24b3e4b7/41a21bdf4355fad2#41a21bdf4355fad2>
------------------------------------------------------------------------
** Marco T Morazan announced:

CALL FOR PAPERS
Trends in Functional Programming 2007
New York, USA
April 2-4, 2007
<http://cs.shu.edu/tfp2007/>

NEW: Abstract submission is now open! Link:
<http://cs.shu.edu/tfp2007/submissions.html>

NEW: Invited Talk: John McCarthy, Stanford University

The symposium on Trends in Functional Programming (TFP) is an
international forum for researchers with interests in all aspects of
functional programming languages, focusing on providing a broad view of
current and future trends in Functional Programming. It aspires to be a
lively environment for presenting the latest research results through
acceptance by extended abstracts. A formal post-symposium refereeing
process then selects the best articles presented at the symposium for
publication in a high-profile volume.
TFP 2007 is co-hosted by Seton Hall University and The City College of
New York (CCNY) and will be held in New York, USA, April 2-4, 2007 at
the CCNY campus.

SCOPE OF THE SYMPOSIUM

The symposium recognizes that new trends may arise through various
routes.  As part of the Symposium's focus on trends we therefore
identify the following five article categories.  High-quality articles
are solicited in any of these categories:

Research Articles     leading-edge, previously unpublished research work
Position Articles     on what new trends should or should not be
Project Articles      descriptions of recently started new projects
Evaluation Articles   what lessons can be drawn from a finished project
Overview Articles     summarizing work with respect to a trendy subject

Articles must be original and not submitted for simultaneous publication
to any other forum. They may consider any aspect of functional
programming: theoretical, implementation-oriented, or more
experience-oriented. Applications of functional programming techniques
to other languages are also within the scope of the symposium.

Articles on the following subject areas are particularly welcomed:

o Dependently Typed Functional Programming
o Validation and Verification of Functional Programs
o Debugging for Functional Languages
o Functional Programming and Security
o Functional Programming and Mobility
o Functional Programming to Animate/Prototype/Implement Systems from
Formal or Semi-Formal Specifications
o Functional Languages for Telecommunications Applications
o Functional Languages for Embedded Systems
o Functional Programming Applied to Global Computing
o Functional GRIDs
o Functional Programming Ideas in Imperative or Object-Oriented Settings
(and the converse)
o Interoperability with Imperative Programming Languages
o Novel Memory Management Techniques
o Parallel/Concurrent Functional Languages
o Program Transformation Techniques
o Empirical Performance Studies
o Abstract/Virtual Machines and Compilers for Functional Languages
o New Implementation Strategies
o any new emerging trend in the functional programming area

If you are in doubt on whether your article is within the scope of TFP,
please contact the TFP 2007 program chair, Marco T. Morazan, at
tfp2007 at shu.ed.

SUBMISSION AND DRAFT PROCEEDINGS

Acceptance of articles for presentation at the symposium is based on the
review of extended abstracts (6 to 10 pages in length) by the program
committee. Accepted  abstracts are to be completed to full papers before
the symposium for publication in the draft proceedings and on-line.
Further details can be found at the TFP 2007 website.

POST-SYMPOSIUM REFEREEING AND PUBLICATION

In addition to the draft symposium proceedings, we intend to continue
the TFP tradition of publishing a high-quality subset of contributions
in the Intellect series on Trends in Functional Programming.

IMPORTANT DATES

Abstract Submission: February 1, 2007
Notification of Acceptance: February 20, 2007
Registration Deadline: March 2, 2007
Camera Ready Full Paper Due: March 9, 2007
TFP Symposium: April 2-4, 2007

PROGRAMME COMMITTEE

John Clements             California Polytechnic State University, USA
Marko van Eekelen         Radboud Universiteit Nijmegen, The Netherlands
Benjamin Goldberg         New York University, USA
Kevin Hammond             University of St. Andrews, UK
Patricia Johann           Rutgers University, USA
Hans-Wolfgang Loidl       Ludwig-Maximilians Universität München, Germany
Rita Loogen               Philipps-Universität Marburg, Germany
Greg Michaelson           Heriot-Watt University, UK
Marco T. Morazán (Chair)  Seton Hall University, USA
Henrik Nilsson            University of Nottingham, UK
Chris Okasaki             United States Military Academy at West Point, USA
Rex Page                  University of Oklahoma, USA
Ricardo Pena              Universidad Complutense de Madrid, Spain
Benjamin C. Pierce        University of Pennsylvania, USA
John Reppy                University of Chicago, USA
Ulrik P. Schultz          University of Southern Denmark, Denmark
Clara Segura              Universidad Complutense de Madrid, Spain
Jocelyn Sérot             Université Blaise Pascal, France
Zhong Shao                Yale University, USA
Olin Shivers              Georgia Institute of Technology, USA
Phil Trinder              Heriot-Watt University, UK
David Walker              Princeton University, USA

ORGANIZATION

Symposium Chair:        Henrik Nilsson, University of Nottingham, UK
Programme Chair:        Marco T. Morazan, Seton Hall University, USA
Treasurer:              Greg Michaelson, Heriot-Watt University, UK
Local Arrangements:     Marco T. Morazan, Seton Hall University, USA

************************************************************************
Dr. Marco T. Morazan
TFP 2007
Program Committee Chair
<http://cs.shu.edu/tfp2007/>
			
========================================================================
6) json-static: syntax magic for JSON
Archive: <http://groups.google.com/group/fa.caml/browse_frm/thread/cf6c1a4da83ea00b/16e5d2d0f6858b52#16e5d2d0f6858b52>
------------------------------------------------------------------------
** Martin Jambon announced:

I am releasing a syntax extension which translates quasi-OCaml type
definitions into converters from/to generic JSON data:

    <http://martin.jambon.free.fr/json-static.html>
    (version 0.9.0, BSD license)

JSON (JavaScript Object Notation) is a language-neutral data format,
which is readily available in JavaScript, but also for many other
programming languages thanks to its simplicity.

This tool uses the json-wheel library that Mika Illouz and I released
a few weeks ago. Using the json-static facility is not mandatory at
all, but it can be a timesaver.

For example, the following declaration defines the type of a point  
object:

    type json point = < x: float; y: float >

This automatically makes two functions available, with the following
signature:

    val json_of_point : point -> Json_type.t
    val point_of_json : Json_type.t -> point

Json_type.t is the type of any JSON data, as defined by the json-wheel
library.

A typical use of this library is illustrated by an HTTP client that
queries a web service which returns JSON data. There is an example of a
program that queries Yahoo!'s JSON web service:

    <http://martin.jambon.free.fr/yahoo.ml.html>
			
========================================================================
7) Time stamp module floating around?
Archive: <http://groups.google.com/group/fa.caml/browse_frm/thread/e7bb72d6a5eae113/b6379fd98ada7a7a#b6379fd98ada7a7a>
------------------------------------------------------------------------
** Denis Bueno asked and Chris King answered:

 > I'm looking for simple bit of code that will print human-readable
 > timestamps. Something suitable for a log file.

I noticed Calendar [1] on the Hump a while ago... it looks pretty
useful.  The API includes a pretty printer.
[1] <http://www.lri.fr/~signoles/prog.en.html>
			
** skaller also answered:

(*
Scrap used in Felix for lines like:
//Timestamp: 2007/1/12 18:36:37 UTC
//Timestamp: 2007/1/13 5:36:37 (local)
*)

(* Helpers assumed elsewhere in Felix, added so the scrap compiles: *)
let last_time = ref 0.0
let si = string_of_int

let tim() =
   let now = (Unix.times()).Unix.tms_utime in
   let elapsed = now -. !last_time in
   last_time := now;
   elapsed
;;

let format_time tm =
   si (tm.Unix.tm_year + 1900) ^ "/" ^
   si (tm.Unix.tm_mon + 1) ^ "/" ^
   si tm.Unix.tm_mday ^ " " ^
   si tm.Unix.tm_hour ^ ":" ^
   si tm.Unix.tm_min ^ ":" ^
   si tm.Unix.tm_sec
;;
try
   (* Time initialisation *)
   let compile_start = Unix.time () in
   let compile_start_gm = Unix.gmtime compile_start in
   let compile_start_local = Unix.localtime compile_start in
   let compile_start_gm_string = format_time compile_start_gm ^ " UTC" in
   let compile_start_local_string =
     format_time compile_start_local ^ " (local)" in
			
========================================================================
8) native-code stack backtraces
Archive: <http://groups.google.com/group/fa.caml/browse_frm/thread/cdd54b032895c17f/9af1209426f60400#9af1209426f60400>
------------------------------------------------------------------------
** Xavier Leroy said:

Among the 150 messages that accumulated while I was at the POPL
conference, I notice that the topic of native-code exception
backtraces is making its come-back:
 > Well, seeing that the very useful native exception backtrace patch
 > has been sitting idle (acknowledged) for more than a year, I think
 > that's not working it out too well :\

Like elephants, I'm slow, but never forget :-)  You'll be pleased to
learn that I've been working recently on exception backtraces for
ocamlopt.  The code (nearly finished) currently sits in the
"opt_backtrace" branch of the repository and should be part of
release 3.10.
It implements an exception backtrace mechanism similar to that already
available in bytecode, but different from Markus Mottl's call backtrace.
The two kinds of backtraces are incomparable in general, but while
Markus's solution is quite low overhead already, mine has even lower
overhead, especially in terms of code size.

Native-code backtraces will be available initially for the following
back-ends: i386, amd64 and powerpc, both for Unix-like OS and for
Windows.  Ports to other back-ends will be considered if there is
sufficient demand.
			
========================================================================
9) color on linux terminal
Archive: <http://groups.google.com/group/fa.caml/browse_frm/thread/b99a2a27411acf50/e35861a581612fe9#e35861a581612fe9>
------------------------------------------------------------------------
** Continuing the thread from last week, Xavier Leroy said:

 > This is my biggest "pet peeve" about OCaml.  It uses *decimal*
 > escapes for characters,

Hexadecimal is also supported ("\x1b[31m blabla") and highly
recommended.
 > not octal like everywhere else in the UNIX and C-influenced universe.

Well, how many fingers do you have? :-)  Octal is a left-over from the
PDP-11 days and should have been abandoned a long time ago.
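Xavier's recommendation in a two-line sketch: the same ANSI color sequence written with hexadecimal escapes (the wrapper name `red` is purely illustrative):

```ocaml
(* ANSI color via hexadecimal escapes: \x1b is ESC. *)
let red s = "\x1b[31m" ^ s ^ "\x1b[0m"

let () = print_endline (red "warning: something red")
```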
			
========================================================================
10) va_arg values
Archive: <http://groups.google.com/group/fa.caml/browse_frm/thread/9663c0be0dfa2a6d/d00690ea470fc554#d00690ea470fc554>
------------------------------------------------------------------------
** Continuing the thread from last week, Xavier Leroy said:

 > value func(value obj, char *method, int n, ...)
 > {
 >   va_list ap;
 >   int i;
 >   value *args = calloc(n + 1, sizeof(value));
 >   value r;
 >   args[0] = obj;
 >   va_start(ap, n);
 >   for (i = 0; i < n; i++)
 >     args[i + 1] = va_arg(ap, value);
 >   va_end(ap);

 >   r = caml_callbackN(caml_get_public_method(obj, hash_variant(method)),
 >                      n + 1, args);
 >   free(args);
 >   return r;
 > }

 > Is it an invalid assumption that it is unnecessary to bother with the
 > CAMLparam/CAMLlocal stuff since there's nothing to trip the GC?


It's a valid assumption.  The hard rule is that you must register as
root all local variables that need to survive a call to the GC or to a
function that might trigger the GC.  In general it's hard to keep
track of those variables, which is why the OCaml manual advocates
registering everything that has type "value".  But in your case you
cannot, and the code fragment above looks safe to me.  (Make sure you
document this in a big comment, though...)  As others pointed out,
you're also relying on the fact that caml_get_public_method
and hash_variant never trigger a GC, which is true now and in the
foreseeable future, but again a big comment is warranted.
 > Should probably check calloc for success and maybe throw an exception
 > if it failed... Which actually brings me to another quick question:
 > if I throw an exception, say caml_failwith("..."), is it necessary to
 > still call CAMLreturn after it? Or will the exception cause the
 > function to exit?

The latter: the exception triggers something like a longjmp(), causing
immediate return from the C function (and de-registration of the local
roots, if any).
 > One last quick question: is the line "args[i + 1] = va_arg(ap, value);"
 > above legal? args[] is an array of value and va_arg(ap, value) will
 > return a value. So, essentially, it's the same thing as the
 > assignment in the following example:
 > value func(value v1)
 > {
 >   value v2 = v1;
 >   ...
 > }

 > I know values are just pointers so it is syntactically correct, but
 > what I'm asking is: is it safe to do? Should I be using some function
 > instead to create a copy of the value?


No copying is necessary.
			
========================================================================
11) function definition
Archive: <http://groups.google.com/group/fa.caml/browse_frm/thread/d569a370c4f76923/bc4e240849b8349d#bc4e240849b8349d>
------------------------------------------------------------------------
** Vu Ngoc San asked and Jon Harrop answered:

 > I'm sure this is a basic question:
 > what is the difference between these ways of defining a function, and
 > what is the most efficient (I mean for the resulting function
 > f = binop o f1 f2, which typically will be called x*1000 times)


That's a very hard question, and is probably platform specific but I'll
throw some ideas at you off the top of my head. I'm sure someone like
Brian Hurt will dive into the assembler and prove me wrong. ;-)
 > type operator = Plus | Minus;;
 > let binop1 o f1 f2 =
 >    fun x -> match o with
 >    | Plus -> (f1 x) +. (f2 x)
 >    | Minus -> (f1 x) -. (f2 x)


That is equivalent to:
   let binop1 o f1 f2 x = ..

it waits for all arguments before doing anything. ocamlopt optimises
currying for that case.

 > let binop2 o f1 f2 =
 >    match o with
 >    | Plus -> fun x -> (f1 x) +. (f2 x)
 >    | Minus -> fun x -> (f1 x) -. (f2 x)

After 3 args, this returns a closure waiting for the fourth arg.
 > let binop3 o f1 f2 =
 >    let op = match o with
 >    | Plus ->  (+.)
 >    | Minus -> (-.) in
 >    fun x -> op (f1 x) (f2 x)

After 3 args, this creates a closure to do the op and another closure
that captures the first closure. ocamlopt might inline the closure "op"
but I doubt it.
 > let binop4 o f1 f2 =
 >    fun x ->
 >      let op = match o with
 >    | Plus ->  (+.)
 >    | Minus -> (-.) in
 >      op (f1 x) (f2 x)

This waits for all four args again (same as first case) and the
closure "op" might be inlined.
Assuming you invoke the function with all four arguments, I would
expect the first solution to be the fastest by a significant margin. If
you factor out a closure of binop with its first three arguments passed
and use it multiple times then the second solution might be faster.

I've found this with a pattern matcher written in OCaml that was faster
when it evaluated the pattern at partial application time, returning a
closure that matched against the pattern it had been partially applied
to. I was surprised to find that. I still don't know why that would be
faster...
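The distinction can be made concrete with binop2 from the question: partial application to the first three arguments performs the match once, so the returned closure does no dispatch per call.

```ocaml
type operator = Plus | Minus

let binop2 o f1 f2 =
  match o with
  | Plus -> fun x -> f1 x +. f2 x
  | Minus -> fun x -> f1 x -. f2 x

(* The match on Plus happens here, exactly once... *)
let f = binop2 Plus sin cos

(* ...so each call of f only runs f1, f2 and the addition. *)
let () = Printf.printf "%g\n" (f 0.0)
```

With binop1 or binop4, by contrast, the match on the operator is re-run on every application of f.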
			
========================================================================
Using folding to read the cwn in vim 6+
------------------------------------------------------------------------
Here is a quick trick to help you read this CWN if you are viewing it
using vim (version 6 or greater).

:set foldmethod=expr
:set foldexpr=getline(v:lnum)=~'^=\\{78}$'?'<1':1
zM
If you know of a better way, please let me know.

========================================================================
Old cwn
------------------------------------------------------------------------

If you happen to miss a CWN, you can send me a message
(alan.schmitt at polytechnique.org) and I'll mail it to you, or go take
a look at the archive (<http://alan.petitepomme.net/cwn/>) or the RSS
feed of the archives (<http://alan.petitepomme.net/cwn/cwn.rss>). If
you also wish to receive it every week by mail, you may subscribe
online at <http://lists.idyll.org/listinfo/caml-news-weekly/>.

========================================================================

-- 
Alan Schmitt <http://alan.petitepomme.net/>

The hacker: someone who figured things out and made something cool happen.
  .O.
  ..O
  OOO

