Velocity Reviews - Java forum

Thread: Re: multithreaded cache?
(http://www.velocityreviews.com/forums/t946313-re-multithreaded-cache.html)

Robert Klemme 05-17-2012 09:54 AM

Re: multithreaded cache?
 
On 05/15/2012 11:14 AM, bugbear wrote:
> However, if the underlying function is slow
> and/or resource hungry (consider caching
> a ray traced image!) many threads can
> end up calling the real function (the second
> and subsequent threads get a miss
> during the first thread's call to the underlying function).
>
> "obviously..." what I want is that only
> the thread that FIRST has a cache miss
> calls the underlying function, whilst other
> threads (for the same key) wait.


I provide a variant of Silvio's, Eric's and Daniel's solution which
should yield higher throughput because it works without read-write
locking. You can find it as a gist in case the code is garbled in the
newsgroup posting:
https://gist.github.com/2717818

Kind regards

robert


The code (untested):

package clj;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/**
 * The cache works with as little locking as possible. Lookup is done in
 * two steps on a cache miss:
 * <ol>
 * <li>On a cache miss a retriever is inserted into the cache which will
 * obtain the value synchronized from a {@link Calculator}.</li>
 * <li>Once calculation has finished a simple lock free reference to the
 * value replaces the retriever in the cache and the value is returned.</li>
 * </ol>
 *
 * @author robert klemme
 *
 * @param <K> key type
 * @param <V> value type
 */
public final class LazyCache<K, V> {
    /**
     * Calculate values from given keys.
     *
     * @param <K> key type
     * @param <V> value type
     */
    public interface Calculator<K, V> {
        V get(K key);
    }

    /**
     * Obtain a value.
     *
     * @param <V> value type.
     */
    private interface Reference<V> {
        V get();
    }

    /**
     * Stupid simple reference which only hands out a fixed value all the
     * time without synchronization.
     *
     * @param <V> value type.
     */
    private static final class Ref<V> implements Reference<V> {
        private final V val;

        public Ref(V val) {
            this.val = val;
        }

        @Override
        public V get() {
            return val;
        }
    }

    /** Mapping from keys to objects which yield values. */
    private final ConcurrentMap<K, Reference<V>> map =
            new ConcurrentHashMap<K, Reference<V>>();

    /** User provided. */
    private final Calculator<K, V> calc;

    /**
     * Create a cache.
     *
     * @param calc user must provide a reasonable implementation, not
     *             <code>null</code>.
     */
    public LazyCache(final Calculator<K, V> calc) {
        if (calc == null)
            throw new NullPointerException();
        this.calc = calc;
    }

    /**
     * Get a value from the cache. The value might have to be calculated.
     *
     * @param key lookup key.
     * @return value, might even be <code>null</code> depending on algorithm.
     */
    public V get(final K key) {
        Reference<V> ref = map.get(key);

        if (ref == null) {
            // miss: install a retriever which computes the value once
            ref = new Reference<V>() {
                @Override
                public synchronized V get() {
                    final V val = calc.get(key);
                    // next time lock free access:
                    Reference<V> x = map.put(key, new Ref<V>(val));
                    assert x == this;
                    return val;
                }
            };

            final Reference<V> old = map.putIfAbsent(key, ref);

            if (old != null)
                ref = old; // someone else was faster
        }

        return ref.get();
    }
}

Robert Klemme 05-17-2012 10:06 AM

Re: multithreaded cache?
 
On 05/17/2012 11:54 AM, Robert Klemme wrote:
> On 05/15/2012 11:14 AM, bugbear wrote:
>> However, if the underlying function is slow
>> and/or resource hungry (consider caching
>> a ray traced image!) many threads can
>> end up calling the real function (the second
>> and subsequent threads get a miss
>> during the first thread's call to the underlying function).
>>
>> "obviously..." what I want is that only
>> the thread that FIRST has a cache miss
>> calls the underlying function, whilst other
>> threads (for the same key) wait.

>
> I provide a variant of Silvio's, Eric's and Daniel's solution which
> should yield higher throughput because it works without read-write
> locking. You can find it as a gist in case the code is garbled in the
> newsgroup posting:
> https://gist.github.com/2717818


There was one important detail missing. This is the corrected code
(gist is updated as well):

package clj;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/**
 * The cache works with as little locking as possible. Lookup is done in
 * two steps on a cache miss:
 * <ol>
 * <li>On a cache miss a retriever is inserted into the cache which will
 * obtain the value synchronized from a {@link Calculator}.</li>
 * <li>Once calculation has finished a simple lock free reference to the
 * value replaces the retriever in the cache and the value is returned.</li>
 * </ol>
 *
 * @author robert klemme
 *
 * @param <K> key type
 * @param <V> value type
 */
public final class LazyCache<K, V> {
    /**
     * Calculate values from given keys.
     *
     * @param <K> key type
     * @param <V> value type
     */
    public interface Calculator<K, V> {
        V get(K key);
    }

    /**
     * Obtain a value.
     *
     * @param <V> value type.
     */
    private interface Reference<V> {
        V get();
    }

    /**
     * Stupid simple reference which only hands out a fixed value all the
     * time without synchronization.
     *
     * @param <V> value type.
     */
    private static final class Ref<V> implements Reference<V> {
        private final V val;

        public Ref(V val) {
            this.val = val;
        }

        @Override
        public V get() {
            return val;
        }
    }

    /** Mapping from keys to objects which yield values. */
    private final ConcurrentMap<K, Reference<V>> map =
            new ConcurrentHashMap<K, Reference<V>>();

    /** User provided. */
    private final Calculator<K, V> calc;

    /**
     * Create a cache.
     *
     * @param calc user must provide a reasonable implementation, not
     *             <code>null</code>.
     */
    public LazyCache(final Calculator<K, V> calc) {
        if (calc == null)
            throw new NullPointerException();
        this.calc = calc;
    }

    /**
     * Get a value from the cache. The value might have to be calculated.
     *
     * @param key lookup key.
     * @return value, might even be <code>null</code> depending on algorithm.
     */
    public V get(final K key) {
        Reference<V> ref = map.get(key);

        if (ref == null) {
            // miss: install a retriever which computes the value once
            ref = new Reference<V>() {
                private V val;
                private boolean here = false; // superfluous but explicit

                @Override
                public synchronized V get() {
                    if (!here) {
                        val = calc.get(key);
                        here = true;
                        // next time lock free access:
                        Reference<V> x = map.put(key, new Ref<V>(val));
                        assert x == this;
                    }

                    return val;
                }
            };

            final Reference<V> old = map.putIfAbsent(key, ref);

            if (old != null)
                ref = old; // someone else was faster
        }

        return ref.get();
    }
}
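
A minimal usage sketch of the class above (the demo harness, the key 42
and the 500 ms sleep are invented for illustration):

package clj;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class LazyCacheDemo {
    public static void main(String[] args) {
        // A deliberately slow Calculator; the cache should invoke it
        // at most once per key, however many threads ask.
        final LazyCache<Integer, String> cache = new LazyCache<Integer, String>(
                new LazyCache.Calculator<Integer, String>() {
                    @Override
                    public String get(Integer key) {
                        try {
                            Thread.sleep(500); // simulate an expensive computation
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                        return "value-" + key;
                    }
                });

        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 8; i++) {
            pool.execute(new Runnable() {
                @Override
                public void run() {
                    // All threads request the same key: the first miss
                    // triggers the calculation, the rest block on it.
                    System.out.println(cache.get(42));
                }
            });
        }
        pool.shutdown();
    }
}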

Daniel Pitts 05-17-2012 04:51 PM

Re: multithreaded cache?
 
On 5/17/12 3:06 AM, Robert Klemme wrote:
> On 05/17/2012 11:54 AM, Robert Klemme wrote:
>> On 05/15/2012 11:14 AM, bugbear wrote:
>>> However, if the underlying function is slow
>>> and/or resource hungry (consider caching
>>> a ray traced image!) many threads can
>>> end up calling the real function (the second
>>> and subsequent threads get a miss
>>> during the first thread's call to the underlying function).
>>>
>>> "obviously..." what I want is that only
>>> the thread that FIRST has a cache miss
>>> calls the underlying function, whilst other
>>> threads (for the same key) wait.

>>
>> I provide a variant of Silvio's, Eric's and Daniel's solution which
>> should yield higher throughput because it works without read-write
>> locking. You can find it as a gist in case the code is garbled in the
>> newsgroup posting:
>> https://gist.github.com/2717818

>
> There was one important detail missing. This is the corrected code (gist
> is updated as well):

[snip]
> if (!here) {
>     val = calc.get(key);
>     here = true;
>     // next time lock free access:
>     Reference<V> x = map.put(key, new Ref<V>(val));
>     assert x == this;
> }

This assert (while true) is unnecessary and potentially harmful.
Nothing becomes inconsistent if the value you replace is not the
calculating reference. However, your assert may break at runtime if
something else changes about this code. Assert should be used to
validate consistency.

I like this class, though I have a couple of style comments.

I personally would name "Ref" as "ConstantReference", and your anonymous
Reference implementation I would make a private static class
CalculatingReference.

In my own personal library of utilities, I actually have two interfaces:
"Factory<T>" with a "T create()" method and "ParameterizedFactory<T,P>"
with a "T create(P parameter)" method, along with a few interesting
implementations of those. In my case, I would probably implement this
cache as a ParameterizedFactory impl that takes another
ParameterizedFactory as the "calc" field, as sketched below.
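
For concreteness, that might look like this (the interface shapes follow
Daniel's description; the caching wrapper is a hypothetical sketch, since
his library is not shown):

interface Factory<T> {
    T create();
}

interface ParameterizedFactory<T, P> {
    T create(P parameter);
}

// Hypothetical sketch: the cache as a ParameterizedFactory that
// delegates misses to another ParameterizedFactory. Note this simple
// version does not coalesce concurrent misses the way LazyCache does;
// the delegate may run more than once for the same parameter.
final class CachingFactory<T, P> implements ParameterizedFactory<T, P> {
    private final java.util.concurrent.ConcurrentMap<P, T> cache =
            new java.util.concurrent.ConcurrentHashMap<P, T>();
    private final ParameterizedFactory<T, P> delegate;

    CachingFactory(ParameterizedFactory<T, P> delegate) {
        this.delegate = delegate;
    }

    @Override
    public T create(P parameter) {
        T value = cache.get(parameter);
        if (value == null) {
            value = delegate.create(parameter);
            T prior = cache.putIfAbsent(parameter, value);
            if (prior != null)
                value = prior; // someone else was faster
        }
        return value;
    }
}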



Robert Klemme 05-17-2012 06:48 PM

Re: multithreaded cache?
 
On 17.05.2012 18:51, Daniel Pitts wrote:
> On 5/17/12 3:06 AM, Robert Klemme wrote:
>> On 05/17/2012 11:54 AM, Robert Klemme wrote:
>>> On 05/15/2012 11:14 AM, bugbear wrote:
>>>> However, if the underlying function is slow
>>>> and/or resource hungry (consider caching
>>>> a ray traced image!) many threads can
>>>> end up calling the real function (the second
>>>> and subsequent threads get a miss
>>>> during the first thread's call to the underlying function).
>>>>
>>>> "obviously..." what I want is that only
>>>> the thread that FIRST has a cache miss
>>>> calls the underlying function, whilst other
>>>> threads (for the same key) wait.
>>>
>>> I provide a variant of Silvio's, Eric's and Daniel's solution which
>>> should yield higher throughput because it works without read-write
>>> locking. You can find it as a gist in case the code is garbled in the
>>> newsgroup posting:
>>> https://gist.github.com/2717818

>>
>> There was one important detail missing. This is the corrected code (gist
>> is updated as well):

> [snip]
>> if (!here) {
>>     val = calc.get(key);
>>     here = true;
>>     // next time lock free access:
>>     Reference<V> x = map.put(key, new Ref<V>(val));
>>     assert x == this;
>> }

> This assert (while true) is unnecessary and potentially harmful. Nothing
> becomes inconsistent if the value you replace is not the calculating
> reference.


The algorithm works under the assumption that the first element placed
into the map is the one doing the calculation and only after that the
constant reference is inserted. If that is not the case (e.g. because
another calculator is put into the map) then obviously something must be
wrong and we might even have more than one lengthy calculation of the
cached value. This assumption is checked with the assert which also
serves as documentation.

> However, your assert may break at runtime if something else
> changes about this code. Assert should be used to validate consistency.


Well, if the logic is changed then of course asserts need to change,
too. There is nothing strange or harmful about that. By that argument
no asserts would make sense because they would need to be changed
whenever a class is changed. Note that internal class invariants can
change without changing the observable invariant. So even though a
class "looks" the same and behaves the same from the outside, asserts
might have to be changed if the internal logic of the class changes.

> I like this class, though I have a couple of style comments.


Thank you!

> I personally would name "Ref" as "ConstantReference",


Well, I was lazy typing and the class isn't public anyway. This is not
production code but an illustrative example for cljp.

> and your anonymous
> Reference implementation I would make a private static class
> CalculatingReference.


Why? You would need to either pass in a reference to LazyCache or pass
map and calc - plus declaring a constructor and type parameters. In
this case I don't really see the benefit.

> In my own personal library of utilities, I actually have two interfaces
> "Factory<T>" with a "T create()" method and "ParameterizedFactory<T,P>"
> with a "T create(P parameter)". Including a few interesting
> implementations of those. In my case, I would probably implement this
> cache as a ParameterizedFactory impl that takes another
> ParameterizedFactory as the "calc" field.


That could be done. But then method naming could be considered
misleading because the create() of the cache isn't really a create in
all cases.

Kind regards

robert

--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/

Silvio Bierman 05-18-2012 06:42 PM

Re: multithreaded cache?
 
On 05/17/2012 11:54 AM, Robert Klemme wrote:
> On 05/15/2012 11:14 AM, bugbear wrote:
>> However, if the underlying function is slow
>> and/or resource hungry (consider caching
>> a ray traced image!) many threads can
>> end up calling the real function (the second
>> and subsequent threads get a miss
>> during the first thread's call to the underlying function).
>>
>> "obviously..." what I want is that only
>> the thread that FIRST has a cache miss
>> calls the underlying function, whilst other
>> threads (for the same key) wait.

>
> I provide a variant of Silvio's, Eric's and Daniel's solution which
> should yield higher throughput because it works without read-write
> locking. You can find it as a gist in case the code is garbled in the
> newsgroup posting:
> https://gist.github.com/2717818
>
> Kind regards
>
> robert
>
>
> [snip: the full code from the 05-17 09:54 AM post, quoted verbatim]



I think you have as many locks as I suggested (being one)? My initial
implementations of something like this used a plain map with an extra
lock but later cases used the by then available ConcurrentHashMap as
well, making one lock redundant.

Silvio

Robert Klemme 05-18-2012 09:45 PM

Re: multithreaded cache?
 
On 18.05.2012 20:42, Silvio Bierman wrote:
> On 05/17/2012 11:54 AM, Robert Klemme wrote:


>> I provide a variant of Silvio's, Eric's and Daniel's solution which
>> should yield higher throughput because it works without read-write
>> locking. You can find it as a gist in case the code is garbled in the
>> newsgroup posting:
>> https://gist.github.com/2717818


> I think you have as many locks as I suggested (being one)? My initial
> implementations of something like this used a plain map with an extra
> lock but later cases used the by then available ConcurrentHashMap as
> well, making one lock redundant.


You didn't show it here, did you? I can't seem to find it in the
thread. Note that CHM also does synchronization. I am not sure from
your statement what exact locking scheme you apply. There does seem to
be one difference though: in my version the second lock goes away after
the value has been computed so there is only the sync of CHM left.

Kind regards

robert

--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/

Eric Sosman 05-18-2012 10:31 PM

Re: multithreaded cache?
 
On 5/18/2012 5:45 PM, Robert Klemme wrote:
> On 18.05.2012 20:42, Silvio Bierman wrote:
>> On 05/17/2012 11:54 AM, Robert Klemme wrote:

>
>>> I provide a variant of Silvio's, Eric's and Daniel's solution which
>>> should yield higher throughput because it works without read-write
>>> locking. You can find it as a gist in case the code is garbled in the
>>> newsgroup posting:
>>> https://gist.github.com/2717818

>
>> I think you have as many locks as I suggested (being one)? My initial
>> implementations of something like this used a plain map with an extra
>> lock but later cases used the by then available ConcurrentHashMap as
>> well, making one lock redundant.

>
> You didn't show it here, did you? I can't seem to find it in the thread.
> Note that CHM also does synchronization. I am not sure from your
> statement what exact locking scheme you apply. There does seem to be one
> difference though: in my version the second lock goes away after the
> value has been computed so there is only the sync of CHM left.


It seems to me that if N threads query the same key at about
the same time, they may all miss the map and go off to perform
the slow computation. If "slow" is large compared to the cost of
a lock-release pair (and if it weren't, why cache?), the tradeoff
seems suspect.

Also, different threads may wind up using different value
instances. If the cache is purely a convenience for a value-only
object that may be all right, but it's not all right if the values
are supposed to be singletons.

Finally, there's more than a whiff of the double-checked locking
antipattern about what you're doing with the `here' flag. I'm not
absolutely sure what you've got is in fact DCL (hence, wrong), but
I'm also not absolutely sure it's DCL-free. Before using the pattern
in any important way, I'd want to check with a major-league guru,
just as "due diligence."

--
Eric Sosman
esosman@ieee-dot-org.invalid

Daniel Pitts 05-19-2012 05:15 AM

Re: multithreaded cache?
 
On 5/18/12 3:31 PM, Eric Sosman wrote:
> On 5/18/2012 5:45 PM, Robert Klemme wrote:
>> On 18.05.2012 20:42, Silvio Bierman wrote:
>>> On 05/17/2012 11:54 AM, Robert Klemme wrote:

>>
>>>> I provide a variant of Silvio's, Eric's and Daniel's solution which
>>>> should yield higher throughput because it works without read-write
>>>> locking. You can find it as a gist in case the code is garbled in the
>>>> newsgroup posting:
>>>> https://gist.github.com/2717818

>>
>>> I think you have as many locks as I suggested (being one)? My initial
>>> implementations of something like this used a plain map with an extra
>>> lock but later cases used the by then available ConcurrentHashMap as
>>> well, making one lock redundant.

>>
>> You didn't show it here, did you? I can't seem to find it in the thread.
>> Note that CHM also does synchronization. I am not sure from your
>> statement what exact locking scheme you apply. There does seem to be one
>> difference though: in my version the second lock goes away after the
>> value has been computed so there is only the sync of CHM left.

>
> It seems to me that if N threads query the same key at about
> the same time, they may all miss the map and go off to perform
> the slow computation.

Nope, they will all create a "Reference" object that *can* perform the
calculation; however, because the method uses "putIfAbsent", only the
first calculating "Reference" object is ever actually used.

After that, the Reference.get() method will block until that particular
calculation is complete. Once that calculation is done, the calculating
Reference will replace itself with a constant Reference for future
accesses to use.
> If "slow" is large compared to the cost of
> a lock-release pair (and if it weren't, why cache?), the tradeoff
> seems suspect.

It would be suspect, but there isn't a tradeoff here. All threads lock
until one value is calculated, basically coalescing the requests for
that value.
>
> Also, different threads may wind up using different value
> instances. If the cache is purely a convenience for a value-only
> object that may be all right, but it's not all right if the values
> are supposed to be singletons.

Again, invalid if you recognize what is actually happening.
>
> Finally, there's more than a whiff of the double-checked locking
> antipattern about what you're doing with the `here' flag. I'm not
> absolutely sure what you've got is in fact DCL (hence, wrong), but
> I'm also not absolutely sure it's DCL-free. Before using the pattern
> in any important way, I'd want to check with a major-league guru,
> just as "due diligence."
>


"double-checked" locking is only broken if you don't have the
appropriate memory semantics enforced elsewhere. The reason the
"classical" double-check locking fails is because there could be a
data-race condition, where the reference is not null, but the object
state hasn't been flushed between threads.

In this case, the "first-check" of obtaining a value from the
ConcurrentMap causes happens-before relationships, and causes the object
state to flush.

Robert Klemme's code is correct, according to everything I've read in
JCIP and elsewhere.

Cheers,
Daniel.
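
Daniel's claim is easy to check empirically. A small test sketch
(hypothetical harness, not from the thread) that counts how often the
Calculator actually runs when many threads miss on the same key at once:

package clj;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public final class CoalescingTest {
    public static void main(String[] args) throws InterruptedException {
        final AtomicInteger calls = new AtomicInteger();
        final LazyCache<String, String> cache = new LazyCache<String, String>(
                new LazyCache.Calculator<String, String>() {
                    @Override
                    public String get(String key) {
                        calls.incrementAndGet(); // count real computations
                        try {
                            Thread.sleep(200); // widen the race window
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                        return key.toUpperCase();
                    }
                });

        final CountDownLatch start = new CountDownLatch(1);
        Thread[] threads = new Thread[16];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        start.await(); // line all threads up on the same miss
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    cache.get("same-key");
                }
            });
            threads[i].start();
        }
        start.countDown();
        for (Thread t : threads)
            t.join();

        // With the corrected LazyCache this prints 1: all 16 requests
        // were coalesced onto a single computation.
        System.out.println("calculator invocations: " + calls.get());
    }
}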

Robert Klemme 05-19-2012 10:33 AM

Re: multithreaded cache?
 
On 19.05.2012 00:31, Eric Sosman wrote:
> On 5/18/2012 5:45 PM, Robert Klemme wrote:
>> On 18.05.2012 20:42, Silvio Bierman wrote:
>>> On 05/17/2012 11:54 AM, Robert Klemme wrote:

>>
>>>> I provide a variant of Silvio's, Eric's and Daniel's solution which
>>>> should yield higher throughput because it works without read-write
>>>> locking. You can find it as a gist in case the code is garbled in the
>>>> newsgroup posting:
>>>> https://gist.github.com/2717818

>>
>>> I think you have as many locks as I suggested (being one)? My initial
>>> implementations of something like this used a plain map with an extra
>>> lock but later cases used the by then available ConcurrentHashMap as
>>> well, making one lock redundant.

>>
>> You didn't show it here, did you? I can't seem to find it in the thread.
>> Note that CHM also does synchronization. I am not sure from your
>> statement what exact locking scheme you apply. There does seem to be one
>> difference though: in my version the second lock goes away after the
>> value has been computed so there is only the sync of CHM left.

>
> It seems to me that if N threads query the same key at about
> the same time, they may all miss the map and go off to perform
> the slow computation. If "slow" is large compared to the cost of
> a lock-release pair (and if it weren't, why cache?), the tradeoff
> seems suspect.
>
> Also, different threads may wind up using different value
> instances. If the cache is purely a convenience for a value-only
> object that may be all right, but it's not all right if the values
> are supposed to be singletons.
>
> Finally, there's more than a whiff of the double-checked locking
> antipattern about what you're doing with the `here' flag. I'm not
> absolutely sure what you've got is in fact DCL (hence, wrong), but
> I'm also not absolutely sure it's DCL-free. Before using the pattern
> in any important way, I'd want to check with a major-league guru,
> just as "due diligence."


There is no double-checked locking going on - neither in LazyCache
itself nor in the retriever instance.
http://en.wikipedia.org/wiki/Double-checked_locking

Daniel has explained all the details already. I'd just add that
creation of multiple retriever objects is the price you pay for
increased concurrency due to CHM. But only one of them is ever going to
be used per key. (Documenting this is one of the tasks of the assert
Daniel questioned earlier.)

I think rw-locking is an inferior technique compared to CHM in this
case because in the case of a cache miss you cannot escalate an r-lock
to a w-lock (to avoid deadlocks) and hence need to release the r-lock,
acquire the w-lock, *and check again*. In other words you have two
more synchronization points with memory barriers, scheduling effects
etc.
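
For comparison, the rw-lock pattern being described looks roughly like
this (a sketch under the stated constraints, not code posted in the
thread; it reuses the Calculator interface from the LazyCache class
above):

package clj;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

final class RwLockCache<K, V> {
    private final Map<K, V> map = new HashMap<K, V>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final LazyCache.Calculator<K, V> calc;

    RwLockCache(LazyCache.Calculator<K, V> calc) {
        this.calc = calc;
    }

    V get(K key) {
        lock.readLock().lock();
        try {
            V val = map.get(key);
            if (val != null)
                return val; // hit under the global read lock
        } finally {
            lock.readLock().unlock();
        }
        // No read-to-write upgrade (deadlock risk): release the r-lock,
        // acquire the w-lock, and check again. These are the two extra
        // synchronization points mentioned above.
        lock.writeLock().lock();
        try {
            V val = map.get(key); // re-check: another writer may have won
            if (val == null) {
                val = calc.get(key); // computed while holding the global w-lock
                map.put(key, val);
            }
            return val;
        } finally {
            lock.writeLock().unlock();
        }
    }
}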

In the case of a cache hit you might still suffer reduced throughput
because of thread starvation: all lookups for values missing from the
cache might be blocked for an indefinite time when using an unfair
rw-lock. Using a fair rw-lock on the other hand is more expensive and
still has the downside of a global lock which affects the complete
range of keys.

CHM improves this by partitioning the key space and locking only
individual partitions for atomic operations. These partitions are
completely independent from a locking perspective. That's the main
advantage of CHM.
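
The partitioning idea can be sketched independently of CHM's internals
(a toy illustration of lock striping; CHM's actual implementation is
more sophisticated):

// Toy lock striping: each key hashes to one of N independent locks,
// so operations on keys in different stripes never contend.
final class StripedLocks {
    private final Object[] locks;

    StripedLocks(int stripes) {
        locks = new Object[stripes];
        for (int i = 0; i < stripes; i++)
            locks[i] = new Object();
    }

    Object lockFor(Object key) {
        int h = key.hashCode();
        h ^= h >>> 16; // spread the hash bits
        return locks[(h & 0x7fffffff) % locks.length];
    }
}

An atomic update then synchronizes on lockFor(key) only, so threads
touching keys in other stripes proceed in parallel.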

Kind regards

robert

--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/

Eric Sosman 05-19-2012 11:09 AM

Re: multithreaded cache?
 
On 5/19/2012 1:15 AM, Daniel Pitts wrote:
> On 5/18/12 3:31 PM, Eric Sosman wrote:
>> On 5/18/2012 5:45 PM, Robert Klemme wrote:
>>>[...]
>>> You didn't show it here, did you? I can't seem to find it in the thread.
>>> Note that CHM also does synchronization. I am not sure from your
>>> statement what exact locking scheme you apply. There does seem to be one
>>> difference though: in my version the second lock goes away after the
>>> value has been computed so there is only the sync of CHM left.

>>
>> It seems to me that if N threads query the same key at about
>> the same time, they may all miss the map and go off to perform
>> the slow computation.

> Nope, they will all create a "Reference" object that *can* perform the
> calculation; however, because the method uses "putIfAbsent", only the
> first calculating "Reference" object is ever actually used.
> [...]


Re-reading, I think you're right. "Never mind," as Emily
Litella used to say.

--
Eric Sosman
esosman@ieee-dot-org.invalid

