Hi all.
I'm using the latest release of ECL, 12.7.1. The question may not be about ECL at all, but there is a misunderstanding on my side: what encoding does ECL use by default when converting strings between Lisp and C? The nice project http://createuniverses.blogspot.com/2009/09/qtimagine-skeleton-project-for-live.html doesn't work with an ECL built with Unicode support, because the code
cl_object princed = cl_princ_to_string(output); // output was "No error" for example
std::string outStr = (char*)princed->base_string.self; // outStr is just "N" now
becomes wrong: with Unicode enabled, base_string.self actually points to a 32-bit wide-character string, so the cast yields just "N". (Is there any way to get a plain 8-bit ASCII string out of a cl_object?) I tried to use QString to convert the wchar_t* into a std::string, QString::fromWCharArray((wchar_t*)princed->base_string.self).toStdString(), but it doesn't work because the string turns out not to be null-terminated, and it pulls a lot of garbage from memory. How should I properly handle these strings? What is the most convenient way to convert cl_objects into strings?
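The best I could come up with for the output side is something like the sketch below. It is untested, and I'm only assuming that si_coerce_to_base_string (the C entry point for si::coerce-to-base-string) is part of the public API in 12.7.1; is this the intended way?

#include <ecl/ecl.h>
#include <string>

// Untested sketch: coerce whatever cl_princ_to_string returns (a 32-bit
// extended string in a Unicode build, a base string otherwise) down to an
// 8-bit base string before copying it into a std::string. Non-ASCII
// characters would presumably signal an error in the coercion.
std::string to_std_string(cl_object obj)
{
    cl_object str  = cl_princ_to_string(obj);
    cl_object base = si_coerce_to_base_string(str);
    // base_string.self is unsigned char*, base_string.fillp holds the length,
    // so there is no reliance on null termination
    return std::string(reinterpret_cast<const char*>(base->base_string.self),
                       base->base_string.fillp);
}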
Furthermore, as input it requires an ASCII char* pointer, and when given a Unicode one it fails (it reads only the first symbol, because it is followed by three nulls: L"(print 1)" -> "(\000\000\000p\000\000\000...").
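For reference, the input side I'm using is roughly the following (a plain narrow char* is the only thing that works for me; the eval_narrow name is just mine):

#include <ecl/ecl.h>

// src must be a narrow, null-terminated 8-bit string, e.g. "(print 1)".
// A wchar_t literal cast to char* stops at the first embedded \000 byte,
// which is exactly the "(\000\000\000p..." behaviour described above.
cl_object eval_narrow(const char* src)
{
    cl_object form = c_string_to_object(src);   // the reader works on char*
    return cl_eval(form);
}

// usage: eval_narrow("(print 1)");   // and not eval_narrow((char*)L"(print 1)")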
It looks quite weird: input comes in as char*, output goes out as wchar_t*, and only in this one way.

The version with Unicode support disabled fails right away on Windows 7 x64 with: Unsupported external format: latin-1. Does anyone know which code page is accepted and works?
--
Necto.