[lisplab-cvs] r170 - trunk/doc/manual
Jørn Inge Vestgården
jivestgarden at common-lisp.net
Mon May 17 19:59:34 UTC 2010
Author: jivestgarden
Date: Mon May 17 15:59:34 2010
New Revision: 170
Log:
Fixing manual. Not finished
Modified:
trunk/doc/manual/lisplab.texi
Modified: trunk/doc/manual/lisplab.texi
==============================================================================
--- trunk/doc/manual/lisplab.texi (original)
+++ trunk/doc/manual/lisplab.texi Mon May 17 15:59:34 2010
@@ -42,28 +42,30 @@
@node Introduction
@chapter Introduction
-Lisplab is a mathematics library in Common Lisp. It is
-placed under the Gnu general public license and offers
+Lisplab is a mathematics/matrix library in Common Lisp. It is
+placed under the GNU General Public License (GPL) and offers
an easy-to-use and rich programming framework for mathematics,
-including linear algebra,
+including Matlab-like matrix handling, linear algebra,
Fast Fourier Transform, special functions, Runge-Kutta solver,
-infix notation, and general matrix utility functions.
+infix notation, and interfaces to BLAS, LAPACK, FFTW, SLATEC and
+QUADPACK.
+
The name Lisplab is inspired by Matlab and Lisplab offers
-much of the same kind of programming style as Matlab, with high level
-manipulation of matrices. Contrary to Matlab, Lisplab can do a
-lot more than just matrix manipulation because Lisplab is
-based on Common Lisp. Hence, you get all
-the benefits of lexical scope, dynamic scope, macros, CLOS, fast execution,
+much of the same kind of programming style as Matlab, with high-level
+manipulation of matrices, but unlike Matlab, Lisplab
+benefits from being part of Common Lisp. Hence, you have
+lexical scope, dynamic scope, macros, first-class
+functions, iteration constructs, CLOS, fast execution,
and working in a free general-purpose programming language. And
best of all: you can enjoy your favorite data types in addition
to the matrices: functions, hash tables, structures,
classes, arbitrary precision integers, rationals, and lists.
Lisplab is not unique in building Matlab-like
-syntax on top of Common Lisp. Other Common Lisp matrix libraries
-are Matlisp, Femlisp and NLISP. Lisplab itself was
-started as a branch of Matlisp, but has now moved
-far from the original code mass.
+syntax on top of Common Lisp. Other similar frameworks are
+Matlisp, Femlisp, NLISP and GSLL. Lisplab itself was
+started as a branch of Matlisp, but only a little of
+the original code is now left.
@node Getting started
@@ -78,15 +80,17 @@
@item FFTW -- The fastest Fast Fourier Transform available.
@end itemize
+@section Portability
Lisplab has been developed with SBCL, SLIME and ASDF on Linux,
and there are still some unnecessary dependencies on these platforms.
@itemize
@item Some of the optimized lisp code uses the
-SBCL macro @code{truly-the}. This must be dealt with.
-@item The FFTW FFI is only for SBCL.
-@item The Matlisp FFI should be portable to other lisps and
-Windows, but it has not been tested.
-@item The @code{*READ-DEFAULT-FLOAT-FORMAT*} must be @code{double-float}.
+SBCL macro @code{truly-the}.
+@item The FFTW FFI works only for SBCL.
+@item The Matlisp FFI should in theory be portable to other lisps and
+Windows, but it has not yet been tested.
+@item The @code{*READ-DEFAULT-FLOAT-FORMAT*} must be @code{double-float}
+when compiling SLATEC. This is a minor problem; see the example just
+after this list.
@end itemize
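+
+For the last point, it should be enough to set the reader default
+before compiling, for instance:
+
+@example
+(setf *read-default-float-format* 'double-float)
+@end example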
Apart from this, Lisplab should be self-contained and not depend on
@@ -127,9 +131,9 @@
If you have problems loading, first look at @code{start.lisp}
and see if you can hack it. Then look at @code{lisplab.asd}.
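+
+If ASDF already knows where to find @code{lisplab.asd}, a plain ASDF
+load along these lines should also work:
+
+@example
+(asdf:operate 'asdf:load-op :lisplab)
+@end example
+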
-To install Blas, Lapack, and FFTW, if you are lazy, and lucky
-enough to administer a Debian or Ubuntu machine,
-you typically do
+To install BLAS, LAPACK, and FFTW, if you are too lazy to do a custom build
+and are lucky enough to administer a Debian or Ubuntu machine,
+you typically write
@example
# aptitude install libatlas3gf-base
# aptitude install libfftw3-3
@@ -152,7 +156,7 @@
@code{.+}, @code{.-}, @code{.*}, @code{./}, @code{.^}, @code{.sin}
@code{.cos}, @code{.tan}, @code{.besj}, @code{.re}, etc.
On numbers these functions work as the non-dotted Common Lisp functions
-and on matrices they work element-vise.
+and on matrices they work elementwise.
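+For example (here @var{m} is assumed to be bound to some matrix; how
+it was created does not matter):
+
+@example
+(.+ 1 2)      ; => 3, the same as (+ 1 2)
+(.sin 0.5d0)  ; => 0.479..., the same as (sin 0.5d0)
+(.+ m m)      ; elementwise sum: each element added to itself
+(.sin m)      ; a new matrix with the sine of each element
+@end example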
@item
Linear algebra functions tend to start with @i{m}:
@code{m*}, @code{minv}, @code{mmax}, @code{mtp}, etc.,
@@ -167,30 +171,12 @@
@section Status - past and future
-Lisplab contains a lot of linear algebra stuff, but
-in the future it is hope it can be broad mathematics programming environment,
-not just linear algebra.
-
Lisplab has been developed for physics simulations and data handling.
-Lisplab started as a refactoring of Matlisp, but the code
-has been to large degree rewritten, except the
-interfaces to Blas and Lapack. Currently,
-Lisplab and Matlisp have more or less the same functionality.
-Lisplab differ from Matlisp in the following ways
-@itemize
-@item Implementation.
-@item Layered structure (@xref{Structure}.)
-@item Shorter names.
-@item Lisplab uses the standard Lapack and Blas libraries, not a special build.
-@item Rich matrix class hierarchy.
-@end itemize
-
-The future I will mainly do minor changes and bug-fixes,
-since it now covers my basic needs. I will only add new
-modules when I personally needed them.
+Lisplab started as a refactoring of Matlisp, but most of the code
+has been replaced and a lot of new code has been written.
+The only Matlisp code that has been kept is the interfaces to BLAS and LAPACK.
-However, there are many exiting extensions that can be made,
-such as
+Some large extensions that could be fun to do:
@itemize
@item Parallel computation, e.g. using MPI.
@item More native linear algebra routines, e.g. eigenvalue computation.
@@ -201,13 +187,15 @@
@item New matrix optimization for new usage, e.g. integer matrices for
image processing or cryptography.
@item Interface to new foreign libraries, e.g. GSL.
+@item More special functions.
@end itemize
-So it this sounds interesting, please contact if you want to contribute.
+And of course a lot more linear algebra and matrix stuff.
+
@section Bugs and limitations
The purpose of Lisplab is to be a platform for
mathematical computations. From this perspective it
-is clear it will never be complete. Also, since there is no
+is clear that it will never be complete. Also, since there is no
spec it is not obvious what is a bug and what is not!
Hence, the list in this section must be read as a non-systematic gathering
@@ -223,25 +211,6 @@
@item Lacks error checks (but these should not be made before a spec!)
@end itemize
-Missing features:
-@itemize
-@item There is no way to iterate through the elements of
-a general matrix in a fast way. (The map functions are currently the only
-thing, but these are structure agnostic and also not fast.) There
-should maybe be an macro @code{w/matrix}.
-@item There should be linear algebra primitives, like row exchange,
-in level 2, so that level 3 can be made entirely without knowledge about
-internal structure of matrices. (Structure similar to blas - lapack)
-@item Integer matrices.
-@item Vectorized execution of operations.
-@item Numerical integration.
-@item Symbolic math. Should be separate module, only with knowledge
-of the dotted algebra generic functions.
-@item The dotted algebra should also work on functions
-so the one could write @code{(.+ (lambda (x) (+ x 1)) 3)}
-and get a new functions as result. It might even
-be possible to make beautiful optimizations this way.
-@end itemize
@node Tutorial
@@ -732,16 +701,21 @@
@chapter Discussion
@section The foreign function interfaces
+The foreign function interface comes from Matlisp and
+is mainly on level 3.
The elements of Lisplab typed matrices (double-float and complex double-float) are
stored as 1D simple arrays, and most Lisps will then have
a SAP inside the array pointer which is binary compatible
with Fortran. This adds some overhead to matrix element references,
-but simplifies the garbage collections.
+but simplifies the memory management considerably compared to a bare pointer type.
+Also, it is important to keep the arrays from being moved
+by the garbage collector. In Matlisp, this was handled by stopping
+the garbage collector, but a smoother and better way would be to
+just pin the arrays for the duration of the foreign call.
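+
+A minimal sketch of what pinning could look like with SBCL's own
+primitives (this is only an illustration, not the code Lisplab
+actually uses):
+
+@example
+;; Pin a double-float vector so the GC cannot move it, and get a
+;; system area pointer (SAP) to its data.  SBCL-specific sketch.
+(let ((v (make-array 4 :element-type 'double-float
+                       :initial-element 0d0)))
+  (sb-sys:with-pinned-objects (v)
+    (let ((sap (sb-sys:vector-sap v)))
+      ;; Here SAP could be handed to a BLAS or LAPACK routine
+      ;; through the FFI; it is valid inside this scope.
+      sap)))
+@end example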
-Matlisp has an extra layer with Fortran compatibility above
+Lisplab has an extra layer with Fortran compatibility above
the ordinary C FFI. This layer is mainly unchanged compared
-to Matlisp, but only the SBCL FFI is in the code. The
-FFIs for other lisps could easily be added, but not without testing.
+to Matlisp, but only the SBCL FFI is distributed with Lisplab.
The FFI for FFTW is a mock-up for SBCL, and it is only for standard
complex transforms and inverse transforms. Of course, a lot more can
@@ -759,6 +733,28 @@
in the build. The symbolic code need not know anything about the rest of Lisplab,
except for the generic methods of the dotted algebra (The level 0).
+
+@section Missing features
+@itemize
+@item There is no way to iterate through the elements of
+a general matrix in a fast way. (The map functions are currently the
+only thing, but these are structure-agnostic and also not fast.) There
+should maybe be a macro @code{w/matrix}.
+@item There should be linear algebra primitives, like row exchange,
+in level 2, so that level 3 can be built entirely without knowledge of
+the internal structure of matrices. (A structure similar to the
+BLAS/LAPACK split.)
+@item Integer matrices.
+@item Vectorized execution of operations.
+@item Numerical integration.
+@item Symbolic math. This should be a separate module, with knowledge
+only of the dotted algebra generic functions.
+@item The dotted algebra should also work on functions,
+so that one could write @code{(.+ (lambda (x) (+ x 1)) 3)}
+and get a new function as the result. It might even
+be possible to make beautiful optimizations this way.
+A sketch of this idea follows after the list.
+@end itemize
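+
+As a rough sketch of how the last item could look, assuming the dotted
+operators dispatch as binary generic functions (the actual generic
+function names and arities in Lisplab may differ):
+
+@example
+;; Illustrative only: lift .+ to functions by returning a closure.
+(defmethod .+ ((f function) (x t))
+  (lambda (&rest args)
+    (.+ (apply f args) x)))
+
+;; Then ((.+ (lambda (x) (+ x 1)) 3) 10) would evaluate to 14.
+@end example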
+
+
@c End stuff
@c @node Index