Velocity Reviews - Computer Hardware Reviews

Velocity Reviews > Newsgroups > Programming > C Programming > Serialization library, request for feedback


Serialization library, request for feedback

 
 
Ulf Åström
 
      12-13-2012
Hello,

I'm writing a serialization library for C. It can convert binary data
(a struct or array, for example) into human-readable text and of
course also read serialized data back and reconstruct it in memory,
including nested pointers. The purpose is to provide a quick way to
save/load in applications and reduce the effort needed to keep file
formats in sync.

The user needs to set up translators (composed of one or more fields)
for each type they wish to serialize. Internally it works by
flattening each type into a list of void pointers and replacing them
with numeric indices. The output format has a simple "<tag> <value>;"
format; it is similar to JSON but it is not a reimplementation of it.
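To make the description concrete, here is a purely invented sketch of what such "<tag> <value>;" output might look like for a two-node linked list. The syntax is only inferred from the description above; the library's actual output may differ:

```
thing 1 {
    value 42;
    next 2;
};
thing 2 {
    value 99;
    next 0;
};
```

Here each object gets a numeric index, and the pointer fields refer to those indices instead of raw addresses.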

I'm looking for feedback on how I can make this most useful to other
people. The code and documentation can be found at:
http://www.happyponyland.net/serialize.php

I'm wondering if there is anything crucial I have missed. It can deal
with most primitive C types, strings, arrays and almost any
arrangement of linked structures. There are still a few pieces
missing; for example I would like to have a clean interface to let the
user provide custom read/write functions for complex data types. I
will also work on the type safety, so structures can't be linked
incorrectly by erroneous or malicious input.

How should I structure the documentation and what level of detail does
it need (I'm guessing more is always better)? Is the API consistent?

I'm not asking anyone to proofread my 3000 lines of spaghetti, but of
course I would also appreciate hearing thoughts on the safety and
portability of the source itself. I'm open to suggestions on how to
improve the conversion process.

Finally, do you think this could be genuinely useful? I'm doing it
just for fun and for use in my personal projects, but is there any
niche in programming where it would fit a need?

/Ulf
 
Keith Thompson
 
      12-13-2012
Ulf Åström <(E-Mail Removed)> writes:
[...]
> The user needs to set up translators (composed of one or more fields)
> for each type they wish to serialize. Internally it works by
> flattening each type into a list of void pointers and replacing them
> with numeric indices. The output format has a simple "<tag> <value>;"
> format; it is similar to JSON but it is not a reimplementation of it.

[...]

Why not just use JSON? It would make the flattened files accessible by
other tools.

--
Keith Thompson (The_Other_Keith) (E-Mail Removed) <http://www.ghoti.net/~kst>
Will write code for food.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
 
Ulf Åström
 
      12-13-2012
On Dec 13, 10:57 pm, Keith Thompson <(E-Mail Removed)> wrote:
> Ulf Åström <(E-Mail Removed)> writes:
>
> > The user needs to set up translators (composed of one or more fields)
> > for each type they wish to serialize. Internally it works by
> > flattening each type into a list of void pointers and replacing them
> > with numeric indices. The output format has a simple "<tag> <value>;"
> > format; it is similar to JSON but it is not a reimplementation of it.

>
> Why not just use JSON? It would make the flattened files accessible by
> other tools.


I'm considering it, but I haven't read the JSON specs in detail yet.
At a glance they seem to be mostly compatible but I don't know how
well it would mesh with my memory offset <-> value layout or how I
would perform pointer substitution. I haven't decided if I want to
pursue a fully compliant JSON implementation, or take the format in
another direction entirely.

Also, there is always the question: Why not use C#, Java, or anything
else that has these things built in? Hm.

/Ulf
 
 
Ben Bacarisse
 
      12-13-2012
Ulf Åström <(E-Mail Removed)> writes:

> I'm writing a serialization library for C. It can convert binary data
> (a struct or array, for example) into human-readable text and of
> course also read serialized data back and reconstruct it in memory,
> including nested pointers. The purpose is to provide a quick way to
> save/load in applications and reduce the effort needed to keep file
> formats in sync.
>
> The user needs to set up translators (composed of one or more fields)
> for each type they wish to serialize. Internally it works by
> flattening each type into a list of void pointers and replacing them
> with numeric indices. The output format has a simple "<tag> <value>;"
> format; it is similar to JSON but it is not a reimplementation of it.
>
> I'm looking for feedback how I can make this most useful to other
> people. The code and documentation can be found at:
> http://www.happyponyland.net/serialize.php
>
> I'm wondering if there is anything crucial I have missed. It can deal
> with most primitive C types, strings, arrays and almost any
> arrangement of linked structures.


The most striking omissions are support for bool, wide strings and
complex types. This is not a criticism, just an observation.

The other area that I could not understand is how to serialise multiple
pointers into a single object. For example, how does one serialise

struct ring_buffer {
    T buffer[BSIZE];
    T *head, *tail;
};

where 'head' and 'tail' both point to an element of 'buffer'? This may
be just because there's no documentation yet -- I had a look at the API
and did not immediately see how this could be done.

As for the design itself, I got stuck on an issue in the first example.
You seem to require a dummy object in order to pass pointers to its
members to the set-up functions, and a comment purports to explain why
this is needed but I just don't see it. The comment includes a
statement that is wrong (if I understand it) that a pointer to the first
member of a field may not equal (when suitably converted) a pointer to
the structure object itself. Even if this were true, I don't see why
that means you can't just pass the offset of the field. At the least,
more explanation is required.

<snip>
--
Ben.
 
 
Ulf Åström
 
      12-13-2012
On Dec 14, 12:11 am, Ben Bacarisse <(E-Mail Removed)> wrote:
> Ulf Åström <(E-Mail Removed)> writes:
> > I'm wondering if there is anything crucial I have missed. It can deal
> > with most primitive C types, strings, arrays and almost any
> > arrangement of linked structures.

>
> The most striking omissions are support for bool, wide strings and
> complex types. This is not a criticism, just an observation.


Ok! I will have a look at this. I don't use them much myself so I
haven't thought of them, but they should be supported.

> The other area that I could not understand is how to serialise multiple
> pointers into a single object. For example, how does one serialise
>
>   struct ring_buffer {
>       T buffer[BSIZE];
>       T *head, *tail;
>   };
>
> where 'head' and 'tail' both point to an element of 'buffer'? This may
> be just because there's no documentation yet -- I had a look at the API
> and did not immediately see how this could be done.


Here is a quick example (assuming int for T):

struct ring_buffer rbuf;
tra = ser_new_tra("ring_buffer", sizeof(struct ring_buffer), NULL);
field = ser_new_field(tra, "int", 0, "buffer", &rbuf, &rbuf.buffer);
field->repeat = BSIZE;
ser_new_field(tra, "int", 1, "head", &rbuf, &rbuf.head);
ser_new_field(tra, "int", 1, "tail", &rbuf, &rbuf.tail);

The 1s to ser_new_field indicate that it is a pointer.

This exposed a bug, however; the int pointers will only be repointed
correctly (during inflation) if they point to rbuf.buffer[0]; other
addresses would be duplicated in memory. This is because it will only
check for pointers that exactly match a field offset. It should also
check all elements of an array.
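The fix Ulf describes (checking every element of the array, not just the field's base offset) amounts to a range-and-stride test. A minimal sketch with invented names; this helper is not part of the library:

```c
#include <stddef.h>

/* Return the element index that ptr refers to within a repeated field
 * of the object at base, or -1 if it is not an element of that field.
 * field_offset, elem_size and repeat correspond to a translator field. */
static long element_index(const void *base, size_t field_offset,
                          size_t elem_size, size_t repeat,
                          const void *ptr)
{
    const char *start = (const char *)base + field_offset;
    const char *p = (const char *)ptr;

    if (p < start || p >= start + elem_size * repeat)
        return -1;  /* outside the array field entirely */
    if ((size_t)(p - start) % elem_size != 0)
        return -1;  /* inside, but not on an element boundary */
    return (long)((p - start) / elem_size);
}
```

With something like this, a pointer to rbuf.buffer[2] could serialize as (field, index 2) instead of being treated as an unknown address and duplicated on inflation.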

> As for the design itself, I got stuck on an issue in the first example.
> You seem to require a dummy object in order to pass pointers to its
> members to the set-up functions, and a comment purports to explain why
> this is needed but I just don't see it. The comment includes a
> statement that is wrong (if I understand it) that a pointer to the first
> member of a field may not equal (when suitably converted) a pointer to
> the structure object itself. Even if this were true, I don't see why
> that means you can't just pass the offset of the field. At the least,
> more explanation is required.


It's a typo; it should say "pos must be within st".

Using a dummy isn't strictly necessary, you may as well pass zero and
an offset. I designed it this way so people won't hardcode offset
values, then change their structures (or start porting their program
to a different architecture!) but forget to update the translator.
There doesn't seem to be any other reliable way to get the offset of a
member (offsetof() with null cast is undefined behaviour, according to
Wikipedia). Anyway, the dummy is only needed when setting up the
translator. Perhaps I could change it to only take an offset but
suggest using a dummy to calculate it, e.g. (&rbuf.head - &rbuf).

Good suggestions, thanks a lot.

/Ulf
 
 
Ben Bacarisse
 
      12-14-2012
Ulf Åström <(E-Mail Removed)> writes:

> On Dec 14, 12:11 am, Ben Bacarisse <(E-Mail Removed)> wrote:

<snip>
>> The other area that I could not understand is how to serialise multiple
>> pointers into a single object. For example, how does one serialise
>>
>>   struct ring_buffer {
>>       T buffer[BSIZE];
>>       T *head, *tail;
>>   };
>>
>> where 'head' and 'tail' both point to an element of 'buffer'? This may
>> be just because there's no documentation yet -- I had a look at the API
>> and did not immediately see how this could be done.

>
> Here is a quick example (assuming int for T):
>
> struct ring_buffer rbuf;
> tra = ser_new_tra("ring_buffer", sizeof(struct ring_buffer), NULL);
> field = ser_new_field(tra, "int", 0, "buffer", &rbuf, &rbuf.buffer);
> field->repeat = BSIZE;
> ser_new_field(tra, "int", 1, "head", &rbuf, &rbuf.head);
> ser_new_field(tra, "int", 1, "tail", &rbuf, &rbuf.tail);
>
> The 1s to ser_new_field indicate that it is a pointer.


What about pointers to pointers?

> This exposed a bug, however; the int pointers will only be repointed
> correctly (during inflation) if they point to rbuf.buffer[0]; other
> addresses would be duplicated in memory. This is because it will only
> check for pointers that exactly match a field offset. It should also
> check all elements of an array.


Thanks. I think I see what's happening. As I understand it, this
suggests there must be a "root" structure for all of the data, but maybe
what you are doing is more subtle than that.

As an example, imagine a binary tree of nodes and a separate linked list
of node pointers. Can I serialise this? Must I serialise the tree
first?

>> As for the design itself, I got stuck on an issue in the first example.
>> You seem to require a dummy object in order to pass pointers to its
>> members to the set-up functions, and a comment purports to explain why
>> this is needed but I just don't see it. The comment includes a
>> statement that is wrong (if I understand it) that a pointer to the first
>> member of a field may not equal (when suitably converted) a pointer to
>> the structure object itself. Even if this were true, I don't see why
>> that means you can't just pass the offset of the field. At the least,
>> more explanation is required.

>
> It's a typo; it should say "pos must be within st".


I think that's a reference to the API documentation ("Note: pos must be
within pos"). I was talking about the comment in example1.c:

"In this example &thing_dummy and &thing_dummy.a will most probably be
the same location. &thing_dummy.b on the other hand will typically end
up 2 to 8 bytes from the base pointer (depending on the computer
architecture). For this reason we do not pass the offset as a number
but rely on the compiler to tell us exactly where the members will
be."

In your example, &thing_dummy.a and &thing_dummy must *always* be the
same location so the phrase "will most probably be" just looks odd.

That aside, the following bit is what threw me: "For this reason we do
not pass the offset as a number". Nothing previous to this seems to
explain why you pass two pointers rather than an offset.

> Using a dummy isn't strictly necessary, you may as well pass zero and
> an offset. I designed it this way so people won't hardcode offset
> values, then change their structures (or start porting their program
> to a different architecture!) but forget to update the translator.


That is a reason but not, to my mind, a strong one. To complicate the
API because it's possible to so grossly abuse a simpler one seems to be
contrary to the spirit of C. What's more, if you are serious about
this, then you should not permit sizeof(myType) in the earlier call,
since a programmer might hard code the size. Instead, they should be
obliged to pass two pointers: &thing_dummy and &thing_dummy + 1. If you
accept a simple size in one case, permit a simple offset in the other.

> There doesn't seem to be any other reliable way to get the offset of a
> member (offsetof() with null cast is undefined behaviour, according to
> Wikipedia). Anyway, the dummy is only needed when setting up the
> translator. Perhaps I could change it to only take an offset but
> suggest using a dummy to calculate it, e.g. (&rbuf.head - &rbuf).


What's wrong with offsetof? I don't follow what you mean about a "null
cast" being undefined.

BTW, writing &rbuf.head - &rbuf won't work -- the pointers are of the
wrong type. You'd need to cast to char * but that's so messy. I think
offsetof is the way to go.
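Both routes can be compared directly. A small self-contained sketch, borrowing the ring_buffer layout from the example (with int standing in for T):

```c
#include <stddef.h>

struct ring_buffer {
    int buffer[8];       /* int standing in for T */
    int *head, *tail;
};

/* Portable and direct: offsetof from <stddef.h>. */
static size_t head_offset_std(void)
{
    return offsetof(struct ring_buffer, head);
}

/* The dummy-object route: well-defined, but it needs a real object
 * and casts to char * before the subtraction makes sense. */
static size_t head_offset_dummy(void)
{
    struct ring_buffer rbuf;
    return (size_t)((char *)&rbuf.head - (char *)&rbuf);
}
```

Both functions return the same value; offsetof just gets there without the object or the casts.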

--
Ben.
 
 
Ian Collins
 
      12-14-2012
Ulf Åström wrote:
> On Dec 13, 10:57 pm, Keith Thompson <(E-Mail Removed)> wrote:
>> Ulf Åström <(E-Mail Removed)> writes:
>>
>>> The user needs to set up translators (composed of one or more fields)
>>> for each type they wish to serialize. Internally it works by
>>> flattening each type into a list of void pointers and replacing them
>>> with numeric indices. The output format has a simple "<tag> <value>;"
>>> format; it is similar to JSON but it is not a reimplementation of it.

>>
>> Why not just use JSON? It would make the flattened files accessible by
>> other tools.

>
> I'm considering it, but I haven't read the JSON specs in detail yet.


There isn't a great deal to read...

> At a glance they seem to be mostly compatible but I don't know how
> well it would mesh with my memory offset <-> value layout or how I
> would perform pointer substitution. I haven't decided if I want to
> pursue a fully compliant JSON implementation, or take the format in
> another direction entirely.


Why make things complex? JSON is an ideal candidate for representing
structure and array types. It is after all designed as an object notation.

> Also, there is always the question: Why not use C#, Java, or anything
> else that has these things built in? Hm.


Other languages have better support for manipulating JSON objects, but
at least one of them (PHP) uses a C library under the hood.

--
Ian Collins
 
 
Bart van Ingen Schenau
 
      12-14-2012
On Thu, 13 Dec 2012 15:57:27 -0800, Ulf Åström wrote:
>
>Using a dummy isn't strictly necessary, you may as well pass zero and
>an offset. I designed it this way so people won't hardcode offset
>values, then change their structures (or start porting their program
>to a different architecture!) but forget to update the translator.
>There doesn't seem to be any other reliable way to get the offset of a
>member (offsetof() with null cast is undefined behaviour, according to
>Wikipedia). Anyway, the dummy is only needed when setting up the
>translator. Perhaps I could change it to only take an offset but
>suggest using a dummy to calculate it, e.g. (&rbuf.head - &rbuf).


You could also setup the interface like this:

ser_field_t * ser_new_field_impl(ser_tra_t * tra, const char * type,
                                 const int ref, const char * tag,
                                 const int offset);

#define ser_new_field(tra, type, ref, tag, src_type, field) \
    ser_new_field_impl(tra, type, ref, tag, offsetof(src_type, field))

Also note that offsetof must be defined by the compiler/implementation
and they can use whatever magic that gets the job done, including the
trick with dereferencing a null pointer.
If you do the same, you incur Undefined Behaviour, but the compiler
itself is above the law in this respect.
That makes offsetof() the easiest and most reliable way to obtain the offset
of a member of a structure and it is in the standard just because you
need compiler magic to get the offset without incurring UB or the
overhead of a dummy object.
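A compilable sketch of Bart's macro idea, with ser_new_field_impl replaced by a stub that just records the offset it receives. The tra argument is dropped here for brevity; everything else follows his outline:

```c
#include <stddef.h>
#include <stdio.h>

static size_t last_offset;  /* records what the macro passed along */

/* Stub standing in for the real registration function. */
static void ser_new_field_impl(const char *type, int ref,
                               const char *tag, size_t offset)
{
    (void)ref;
    last_offset = offset;
    printf("field %s (%s) at offset %zu\n", tag, type, offset);
}

/* offsetof stays hidden inside the macro, so the caller never writes
 * a hardcoded number and never needs a dummy object. */
#define ser_new_field(type, ref, tag, src_type, field) \
    ser_new_field_impl(type, ref, tag, offsetof(src_type, field))

struct thing {
    int a;
    double b;
};
```

A call like ser_new_field("double", 0, "b", struct thing, b) then expands to pass offsetof(struct thing, b) automatically.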

>
>/Ulf


Bart v Ingen Schenau
 
 
BGB
 
      12-14-2012
On 12/14/2012 12:38 AM, Ian Collins wrote:
> Ulf Åström wrote:
>> On Dec 13, 10:57 pm, Keith Thompson <(E-Mail Removed)> wrote:
>>> Ulf Åström <(E-Mail Removed)> writes:
>>>
>>>> The user needs to set up translators (composed of one or more fields)
>>>> for each type they wish to serialize. Internally it works by
>>>> flattening each type into a list of void pointers and replacing them
>>>> with numeric indices. The output format has a simple "<tag> <value>;"
>>>> format; it is similar to JSON but it is not a reimplementation of it.
>>>
>>> Why not just use JSON? It would make the flattened files accessible by
>>> other tools.

>>
>> I'm considering it, but I haven't read the JSON specs in detail yet.

>
> There isn't a great deal to read...
>
>> At a glance they seem to be mostly compatible but I don't know how
>> well it would mesh with my memory offset <-> value layout or how I
>> would perform pointer substitution. I haven't decided if I want to
>> pursue a fully compliant JSON implementation, or take the format in
>> another direction entirely.

>
> Why make things complex? JSON is an ideal candidate for representing
> structure and array types. It is after all designed as an object notation.
>


yep, JSON makes sense.

one possible downside though is that it doesn't normally identify object
types, which means some means may be needed to identify what sort of
struct is being serialized, and/or the type of array, ...

it is either that, or deal with data in a dynamically-typed manner,
rather than directly mapping it to raw C structs.

JSON is generally better IMO for serializing dynamically-typed data,
than for doing data-binding against structs.

another minor problem with JSON is that, in its pure form, it has no
good way to deal with cyclic data (where a referenced object may refer
back to a prior object), but an extended form could allow this.



in my case, one mechanism I have serializes a wider range of data into a
binary format, but it only deals with dynamically type-tagged data
(which basically means that it was allocated via my GC API, with a
type-name supplied), and also requires that the typedefs have any
relevant annotations (it depends on data gathered by a tool that parses
the headers in order to work correctly).

it may try to use a special serialization handler if one is registered,
but will otherwise fall back to using key/value serialization of the
structs.


the basic format (for each member) is:
<typeindex:VLI> <size:VLI> <data:byte[size]>

where 'VLI' is a special encoding for variable-length-integers.
in my case:
  00-7F          0-127
  80-BF XX       128-16383
  C0-DF XX XX    16384-2097151
  ...

a slight VLI variant (SVLI) encodes signed values by folding the sign
into the LSB, so the values follow the pattern:
0, -1, 1, -2, 2, ...
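This sign fold is the "zigzag" mapping also used by Protocol Buffers' varints. A sketch of just the fold (function names invented here; BGB's actual VLI byte layout is his own and not reproduced):

```c
#include <stdint.h>

/* Fold the sign into the LSB: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
 * The folded value can then be fed to any unsigned VLI encoder. */
static uint32_t svli_fold(int32_t v)
{
    /* v >> 31 assumes an arithmetic right shift (all-ones for negative
     * v, zero otherwise), which holds on mainstream compilers. */
    return ((uint32_t)v << 1) ^ (uint32_t)(v >> 31);
}

/* Inverse of the fold. */
static int32_t svli_unfold(uint32_t u)
{
    return (int32_t)(u >> 1) ^ -(int32_t)(u & 1);
}
```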


the format was basically just a flat-array of serialized members, and
these members were linked via index (and allowed both backwards and
forwards references). it has a limited form of value-identity preservation.

each typeindex member is an index into this array, giving the element
that holds the type-name (which, as a simplifying assumption, is
assumed to be an ASCII string).

index 0 is reserved for NULL, and is not encoded. index 1 is the first
encoded index, and serves as the "root member" (basically, the "thing
that the program asked the serializer to serialize").

as a later addition, if the typeindex is 0, then the member is a comment
(and does not appear in the member array). a comment member immediately
at the start of the file is used to indicate the "type" of the file
(basically, it is a "magic string"), which is then followed by the root
member.


a partial drawback was that the format doesn't have any good way to
indicate "how" the data is encoded, making it potentially more subject
to versioning issues (consider, for example, if a structure-layout
changes, ...). I have previously tried to develop self-describing
serialization formats, but making a format fully self-describing tends
to make working with it unreasonably complex. (the
basic idea here would be that not only would the format identify the
types in use, but it would also encode information to describe all of
the data encodings used by the format, down to the level of some number
of "atomic" types, ...).


however, my Script-VM's bytecode serialization format is based on the
above mechanism (the bytecode is actually just the result of serializing
the output from compiling a source-module).



some amount of stuff also uses an S-Expression based notation:
999 //integer number
3.14159 //real number
"text" //string
name //symbol (identifier, used to identify something)
:name //keyword (special type of literal identifier)
name: //field or item name
....
( values ) //list of items (composed of "cons cells")
#( values ) //array of items (dynamically typed)
{ key: value ... } //object (dynamically-typed)
#A<sig> ( ... ) //array (statically-typed)
#X<name> { key: value ... } //struct
#L<name> { key: value ... } //instance of a class
....
#idx# //object index (declaration or reference)
#z //null
#u //undefined
#t //true
#f //false
....


so, for example, a struct like:
typedef dytname("foo_t") as_variant //(magic annotations, 1)
struct Foo_s Foo;

struct Foo_s {
    Foo *next;
    char *name;
    int x;
    float y;
    double z[16];
};


1: these annotations are no-op macros in a normal C compiler (and mostly
expand to special attributes used by the header-processing tool).

"dytname()" basically gives the type-name that will be used when
allocating instances of this struct-type (it is used to key the
type-name back to the struct).

"as_variant" is basically a hint for how it should be handled by my
scripting language. this modifier asserts that the type should be
treated as a self-defined VM type (potentially opaque), rather than be
treated as a boxed-struct or as a boxed "pointer to a struct" (more
literally mapping the struct and/or struct pointer to the scripting
language, causing script code to see it more like how C would see it).


with the structs being created like:
Foo *obj;
obj=gctalloc("foo_t", sizeof(Foo)); //allocate object with type-tag
obj->name=dystrdup("foo_instance_13"); //make a new tagged string
....


might be serialized as:
#0# = #X<foo_t> { next: #1# name: "foo_instance_13" x: 99 y: 4.9 z:
#A<d> ( 2.0 3.0 ... ) }
#1# = #X<foo_t> { ... }

where this format works, but isn't really, exactly, pretty...



I also have a network protocol I call "BSXRP", which basically works
very similar to the above (same data model, ...), just it uses Huffman
coding and predictive context modeling of the data, and "clever" ways of
VLC coding what data-values are sent. (compression is favorable to that
of S-Expressions + Deflate, typically being around 25% the size, whereas
Deflate by itself was reducing the S-Expressions to around 10% their
original size, or IOW: around 2.5% the size of the textual
serialization). (basically, if similar repeating structures are sent,
prior structures may be used as "templates" for sending later
structures, allowing them to be encoded in fewer bits, essentially
working sort of like a dynamically-built schema).

as-before, it has special cases to allow encoding cyclic data, but the
protocol does not generally preserve "value-identity" (value-identity or
data-identity is its own hairy set of issues, and in my case I leave
matters of identity to higher-level protocols).

some tweaks to the format also allow it to give modest compression
improvements over Deflate when being used for delivering lots of short
plaintext or binary data messages (it essentially includes a Deflate64
like compressor as a sub-mode, but addresses some "weak areas" regarding
Deflate).

a minor drawback though is that the context models can eat up a lot of
memory (the memory costs are considerably higher than those of Deflate).

(it was originally written partly as a "proof of concept", but is,
technically, pretty much overkill).


>> Also, there is always the question: Why not use C#, Java, or anything
>> else that has these things built in? Hm.

>
> Other languages have better support for manipulating JSON objects, but
> at least one of them (PHP) uses a C library under the hood.
>


yeah...

I use variants of both JSON and S-Expressions, but mostly for
dynamically typed data.

not depending on the use of type-tags and data mined from headers would
require a rather different implementation strategy.

most of my code is C, but I make fairly extensive use of dynamic-type
tagging.


so, yeah, all this isn't really a "general purpose" set of solutions for
the data-serialization process.

I suspect, though, that there may not actually be any sort of
entirely "general purpose" solution to this problem...

and, as-is, to use my implementation would probably require dragging
around roughly 400 kloc of code, and it is very likely that many
people would object to needing to use a special memory manager and
code-processing tools to be able to use these facilities...


or, IOW:
if you allocate the data with "malloc()" or via a raw "mmap()" or
similar, a lot of my code will have no idea what it is looking at (yes,
a lame limitation, I know).

granted, the scripting language can partly work around it:
if you don't use "as_variant" modifier, the struct will map literally,
and ironically, this allows script-code to still use "malloc()" for
these types.

however, the data serialization code generally isn't this clever...


or such...

 
 
Ian Collins
 
      12-14-2012
BGB wrote:
> On 12/14/2012 12:38 AM, Ian Collins wrote:
>>
>> Other languages have better support for manipulating JSON objects, but
>> at least one of them (PHP) uses a C library under the hood.

>
> yeah...
>
> I use variants of both JSON and S-Expressions, but mostly for
> dynamically typed data.
>
> not depending on the use of type-tags and data mined from headers would
> require a rather different implementation strategy.
>
> most of my code is C, but I make fairly extensive use of dynamic-type
> tagging.


I originally wrote a JSON library to enable my (C++) server side web
application to interact with client side JavaScript. I soon found the
objects extremely useful for building dynamic type objects in general
programming. I doubt the same would be true in C, not elegantly at least.

--
Ian Collins
 