Channel: VMware Communities : All Content - vFabric GemFire [ARCHIVED]
Viewing all 990 articles
Browse latest View live

Unable to save VSD template


I am unable to save a VSD template. Whenever I try, I get a message like this:

 

(attached screenshot: vsd1.png)

 

I am using VSD on Windows 7 and running it in compatibility mode for XP.


Native client compatibility with 64 bit 2008 Windows Server


A client is reporting problems with the 32-bit .NET library. Any help appreciated.

 

 

We are getting the below error when we deployed the CCP site (ASP.Net) on Windows Server 2008 R2 (64-bit).

 

Error: Could not load file or assembly 'GemStone.GemFire.Cache, Version=3.0.0.2, Culture=neutral, PublicKeyToken=126e6338d9f55e0c' or one of its dependencies. An attempt was made to load a program with an incorrect format.

 

It seems we need to deploy the 64-bit GemFire Native Client. I found a 64-bit GemFire Native Client available from VMware.

 

We would appreciate advice on the right GemFire Native Client for .NET that can be installed on 64-bit machines.

How to migrate diskstores when changing number of buckets for partitioned persistent region


Is there a way, or a disk-store utility, to migrate existing disk stores when you want to change the total number of buckets and/or add a new partition to the region?

GemFire 6.6.3 cacheserver log wouldn't roll


Hi

 

As per documentation here

http://pubs.vmware.com/vfabric5/topic/com.vmware.vfabric.gemfire.6.6/managing/logging/setting_up_logging.html

 

I am configuring logging for a 2-member cache server cluster.

 

I see the following settings in the log output of the first server:

 

  log-disk-space-limit="0"
  log-file="C:\dev\AS\REFERENCES\Java\deploy\scripts\..\logs\server1.log"
  log-file-size-limit="10"
  log-level="config"

 

and likewise for the second member:

 

  log-disk-space-limit="0"
  log-file="C:\dev\AS\REFERENCES\Java\deploy\scripts\..\logs\server2.log"
  log-file-size-limit="10"
  log-level="config"

 

I am passing log-file as a command-line argument to give each member a separate log file while still maintaining a single properties file.

 

I see that it doesn't roll into a new file, even after server1.log has grown well above 10 MB. In fact it doesn't roll at all.

It does create meta-server1-01.log as per documentation.

 

This is in GemFire 6.6.3.

 

Am I missing something?

 

thanks

Client missed updates from cache - how do I debug?


We have several GemFire clients that maintain a local cache and register interest in all keys from a server region:

 

<gfe:client-region id="myRegion" data-policy="NORMAL"
    cache-ref="myCache" key-constraint="java.lang.String"
    value-constraint="com.mycompany.myclass">
  <gfe:key-interest>
    <bean id="key" class="java.lang.String">
      <constructor-arg value="ALL_KEYS" />
    </bean>
  </gfe:key-interest>
</gfe:client-region>

 

This has been working well, but one day last week one of the client caches missed a number of updates.  We couldn't see anything in the server or client logs to indicate a problem.  This client application was restarted that night and has been getting its updates ever since.

 

I'm struggling with how to debug this type of problem.  Is there something in the statistics that would give a clue as to what happened?  I have a feeling there isn't much that can be done to look back and see what happened, but maybe there's some sort of a trace or event listener that could catch a future occurrence?

 

Any suggestions?

Graceful shutdown of member


What is the proper way to gracefully shut down a cache-server member using the API? Is the procedure any different if I use Spring to manage my GemFire cache instance?

Tom

Migration to GemFire: Alternative to Hibernate Criteria API


Hi

 

We want to migrate an existing web application, consisting of Spring and Hibernate, to GemFire. Currently we are using an RDBMS for persistence, but we want to replace it with the GemFire data grid.

 

Now we need to modify/recreate the DAO layer so that it can access data from the GemFire cache. But the problem is that the DAO layer extensively uses the Hibernate Criteria API for data access. Recreating the access logic with the GemFire API would be very difficult and also time-consuming.

 

So the questions are:

 

1. Does GemFire provide a Criteria API?

      ---- probably the answer is no, since criteria support is mentioned nowhere in the documentation.

 

2. Is there any third-party library that supports a Hibernate-like criteria API and converts it into GemFire API/OQL calls?

     ----- GORM for GemFire seems to be the solution, but I think that project is no longer supported.
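For context, GemFire's native query facility is OQL, run through the QueryService (e.g. cache.getQueryService().newQuery("SELECT * FROM /customers c WHERE c.age > 30")). A hedged, standalone sketch of a minimal criteria-to-OQL translator — class, method, and field names are all hypothetical — that a DAO layer could build on:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical criteria-to-OQL translator; in GemFire the resulting string
// would be executed via cache.getQueryService().newQuery(oql).execute().
public class OqlCriteria {
    private final String regionPath;
    private final List<String> predicates = new ArrayList<>();

    public OqlCriteria(String regionPath) {
        this.regionPath = regionPath;
    }

    public OqlCriteria eq(String field, Object value) {
        predicates.add(field + " = " + literal(value));
        return this;
    }

    public OqlCriteria gt(String field, Number value) {
        predicates.add(field + " > " + value);
        return this;
    }

    private static String literal(Object v) {
        return (v instanceof Number) ? v.toString() : "'" + v + "'";
    }

    public String toOql() {
        StringBuilder sb = new StringBuilder("SELECT * FROM " + regionPath + " e");
        for (int i = 0; i < predicates.size(); i++) {
            sb.append(i == 0 ? " WHERE " : " AND ").append("e.").append(predicates.get(i));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(new OqlCriteria("/customers").eq("lastName", "Smith").gt("age", 30).toOql());
        // SELECT * FROM /customers e WHERE e.lastName = 'Smith' AND e.age > 30
    }
}
```

This only covers trivial conjunctions; OQL has no built-in criteria builder, so anything like Hibernate's full Criteria semantics would have to be written (or a query-DSL library adapted) by hand.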

 

Any help in this matter is highly appreciated.

 

Thanks

Mahesh Kumar

Not getting callbacks on CacheListener


We have a couple of nodes in a distributed cache, all servers, with replicated regions.

When we add data it flows to the other nodes, all works fine.

 

I tried to add a CacheListener on the node that inserts the data into the cache, but the only callback I get is afterRegionCreate().

 

What am I doing wrong?

 

<?xml version="1.0"?>
<!DOCTYPE cache PUBLIC "-//GemStone Systems, Inc.//GemFire Declarative Caching 6.6//EN" "http://www.gemstone.com/dtd/cache6_6.dtd">
<cache>
  <region name="ABC">
    <region-attributes />
    <region name="USER">
      <region-attributes />
      <region name="ACCESS">
        <region-attributes data-policy="replicate" scope="distributed-ack">
          <cache-listener>
            <class-name>cache.CacheUpdateListener</class-name>
          </cache-listener>
        </region-attributes>
      </region>
    </region>
  </region>
</cache>
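One thing worth checking: a CacheListener only receives entry callbacks for the region it is attached to, and in the cache.xml above it is declared only on the nested ACCESS subregion. If the puts actually land in /ABC or /ABC/USER, a listener would need to be declared on that region as well. A hedged fragment based on the same cache.xml (attaching the same listener class to USER):

```xml
<region name="USER">
  <region-attributes>
    <cache-listener>
      <class-name>cache.CacheUpdateListener</class-name>
    </cache-listener>
  </region-attributes>
  <!-- nested ACCESS region unchanged -->
</region>
```

The entry callbacks to expect on inserts/updates are afterCreate() and afterUpdate() on the listener for the region receiving the operation.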

Large amounts of overflow files


Hi

 

We've recently introduced disk overflow for our GemFire cluster, and I'm at a loss right now as to how our overflow disk consumption has gotten so high.

 

Environment :-

 

Gemfire 6.5.1.42

Red Hat Enterprise Linux (Linux 2.6.18-164.10.1.el5 #1 SMP Wed Dec 30 18:35:28 EST 2009 x86_64 x86_64 x86_64 GNU/Linux)

Java 1.6.0_24-b07

 

Cluster :-

 

10 JVMs, each with a 2.5 GB heap

 

 

Gemfire Config :-

 

<disk-store name="diRegionDiskStore" allow-force-compaction="true" compaction-threshold="25">

 

<resource-manager critical-heap-percentage="90" eviction-heap-percentage="80"/>

 

<eviction-attributes>
            <lru-heap-percentage action="overflow-to-disk"/>
</eviction-attributes>

 

 

Scenario :-

 

With the eviction settings we have roughly 1.7 GB of memory before we should start overflowing to disk.

 

We've noticed that our overflow per node is using around 3 GB, so we have three .crf files at about 1 GB each.  We don't think this is possible based on our current data loads.  Our JVMs are at the eviction level pretty much constantly, so that means we have 1.7 GB in memory and 3 GB on disk for each node.

 

For example...

 

-rw-r--r-- 1 xxxxx  xxxxx  2048 Jan 25 13:21 BACKUPdiRegionDiskStore.if

-rw-r--r-- 1  xxxxx   xxxxx  0 Mar 27 09:01 DRLK_IFdiRegionDiskStore.lk

-rw-rw-r-- 1  xxxxx   xxxxx   1073741824 Apr  4 11:43 OVERFLOWdiRegionDiskStore_12.crf

-rw-rw-r-- 1  xxxxx   xxxxx   1073741824 Apr  4 16:22 OVERFLOWdiRegionDiskStore_17.crf

-rw-rw-r-- 1  xxxxx   xxxxx  1073741824 Mar 28 22:03 OVERFLOWdiRegionDiskStore_1.crf

We've tried running online compaction and offline per-store compaction, but the .crf files do not shrink.

 

In fact when we try to run off-line compaction we get the following error....

 

1020 xxx@xxx bin> ./gemfire -debug compact-disk-store diRegionDiskStore /local/0/sw/xxxxx/overflow/ldnuat/diRegion/data-server10

ERROR: Operation "compact-disk-store" failed because:  disk-store=diRegionDiskStore: java.lang.NullPointerException.

com.gemstone.gemfire.GemFireIOException:  disk-store=diRegionDiskStore: java.lang.NullPointerException

        at com.gemstone.gemfire.internal.SystemAdmin.compactDiskStore(SystemAdmin.java:404)

        at com.gemstone.gemfire.internal.SystemAdmin.invoke(SystemAdmin.java:1965)

        at com.gemstone.gemfire.internal.SystemAdmin.main(SystemAdmin.java:1772)

Caused by: java.lang.NullPointerException

        at com.gemstone.gemfire.internal.cache.DiskStoreImpl.offlineCompact(DiskStoreImpl.java:4109)

        at com.gemstone.gemfire.internal.cache.DiskStoreImpl.offlineCompact(DiskStoreImpl.java:4416)

        at com.gemstone.gemfire.internal.SystemAdmin.compactDiskStore(SystemAdmin.java:402)

 

Question :-

 

We do not believe that the overflow file consumption is an accurate representation of our actual data load.  Also, our production machines have very small disks (78 GB).  We're currently consuming 30 GB of disk and finding this continues to rise.  It has caused production issues for us previously when overflow has completely exhausted our disk.

 

We believe we may be cycling data into overflow prematurely due to garbage in the old generation pushing us above the eviction threshold. However, even if this is the case and eventually every value is overflowed, this should still not result in 3 GB and rising of overflow files.


Problem with 2 servers.


I have a problem with my server configuration. I have two servers (A and B), and the problem is this: I start server A, which is OK; then I start server B, which is also OK; then I shut down server A, and then I shut down server B.

 

The problem now is: if I start server A, it waits for server B, and I don't want this to happen. I want server A to start even if server B is down.

 

How can I configure the servers to solve this?

Server Configuration


How can I configure one cluster with two nodes and two cache servers on each node?

 

Thanks

FunctionService withFilter detail understanding


I have three distributed servers. I am using FunctionService. It has a method FunctionService.withFilter() to which I pass a set of keys. It works in a broadcast manner: the same key set is seen by all three servers.


Is there any way I can pass a different key set for processing to each of the three servers?

Does the user have control over routing a certain key set to a certain distributed server, as with other Big Data products? For example, I want keys 1 to 30000 to go to server 1, 30001 to 65000 to server 2, and 65001 to 100000 to server 3.

 

Is this achievable with FunctionService? If not, what is the alternative?
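For background: on a partitioned region, withFilter() normally routes each filter key to the member hosting that key, and which member that is comes from the region's partitioning (optionally a custom PartitionResolver), not from the caller. As a standalone sketch — with the key ranges from the post and the actual GemFire call shown only in comments — keys can also be grouped client-side and executed per group:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Standalone sketch: group integer keys into the three ranges from the post.
// With GemFire, each group would then be executed separately, e.g.
// (hypothetical region and function names):
//   FunctionService.onRegion(region).withFilter(keysForGroup).execute("myFunction");
public class KeyRangeRouter {
    // 1-based "server" a key belongs to: 1..30000 -> 1, 30001..65000 -> 2, rest -> 3.
    public static int serverFor(int key) {
        if (key <= 30000) return 1;
        if (key <= 65000) return 2;
        return 3;
    }

    public static Map<Integer, List<Integer>> group(List<Integer> keys) {
        Map<Integer, List<Integer>> byServer = new HashMap<>();
        for (int key : keys) {
            byServer.computeIfAbsent(serverFor(key), s -> new ArrayList<>()).add(key);
        }
        return byServer;
    }
}
```

Whether the per-group execution actually lands on one physical server per group still depends on how the region's buckets are placed, which is the PartitionResolver's job rather than withFilter()'s.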

Setting expiration time when adding entry to cache


An application running as a GemFire client would like to control when entries expire.  At the time it adds an entry to the cache, it knows, based on some business rules, when it wants that particular entry to expire.

 

We aren't seeing how this can be done.  The normal time-based expiration methods won't help.  It seems like some form of custom expiry would need to be used.  Would the client application need to store its desired expiration time somewhere in the cache entry so the custom expiry code could read it?  And if so, would that be terribly inefficient?  I can't seem to find any details on how expiration works.

 

Any ideas would be appreciated.  What we are looking for is something similar to what we have with messaging systems like MQ/JMS.  With those types of systems, you can specify the expiration period when you create the message.
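For what it's worth, GemFire does expose a per-entry hook for exactly this pattern: the CustomExpiry interface, whose getExpiry(Region.Entry) is consulted per entry and returns ExpirationAttributes. Storing the desired expiry inside the value and reading it back in getExpiry() is the usual approach. A standalone sketch of just the time arithmetic — the value wrapper is hypothetical, and the GemFire wiring is shown only in comments:

```java
// Standalone sketch of per-entry expiry arithmetic. In GemFire this logic would
// live in a com.gemstone.gemfire.cache.CustomExpiry implementation whose
// getExpiry(Region.Entry) returns new ExpirationAttributes(seconds, action).
public class EntryExpiry {
    // Hypothetical value wrapper carrying its own absolute expiry time,
    // set by the client at put() time from its business rules.
    public static class TimedValue {
        final Object payload;
        final long expireAtMillis;

        public TimedValue(Object payload, long expireAtMillis) {
            this.payload = payload;
            this.expireAtMillis = expireAtMillis;
        }
    }

    // Remaining time-to-live in whole seconds, clamped to at least 1 so an
    // entry whose deadline has already passed still expires promptly.
    public static int remainingTtlSeconds(TimedValue v, long nowMillis) {
        long remainingMillis = v.expireAtMillis - nowMillis;
        if (remainingMillis <= 0) return 1;
        return (int) Math.max(1, remainingMillis / 1000);
    }
}
```

The per-entry cost is one extra field in the value plus a getExpiry() call when expiration is evaluated, which is generally modest compared to the entry itself.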

Limit TLS negotiation - GemFire


Hi, I have a security question: it's about how to limit open ports on my GemFire servers, to protect them from DoS attacks. I have two servers, each running an agent, a locator, and a cache server.

Timeout setting for GemFire


Hi,

 

We have a central GemFire cache running on a server, and a .NET client with no local cache that accesses the server. Because the client sometimes needs to fetch a large amount of data from the server, we configured a big ReadTimeout value to avoid timeout exceptions. But with that big value, even when the remote server really is down, the client also waits a long time before it fails. Can you advise how to set things up so calls return immediately if the server is not available?

 

Thanks

Yao
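A hedged note on the knobs involved (values below are purely illustrative): in the client pool configuration, read-timeout (milliseconds) bounds how long the client waits for a reply on an already-established connection, while retry-attempts bounds how many servers are tried before the operation fails. A long read timeout is appropriate for large fetches but does not by itself make a dead-server failure fast; that depends on connection establishment and retry behavior. A sketch of a client cache.xml pool:

```xml
<!-- Illustrative values only: generous read timeout for large results,
     but at most one retry so a dead server fails over quickly. -->
<pool name="examplePool" read-timeout="300000" retry-attempts="1">
  <server host="cacheserver-host" port="40404" />
</pool>
```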


"IllegalStateException: Unknown pdx type=2" when all nodes get restarted while client stays up

$
0
0

Hi

 

I am using a program based on the native client 7.0.1 (.net) to execute functions on a GemFire cluster. I tried to verify a couple of failure scenarios and all works fine except for one scenario, where the minimal case would look like this:

 

  1. Start GF1 (gemfire node 1)
  2. Start GF2
  3. Start Client
  4. Let Client execute a function on GF cluster
  5. Kill both GF1 and GF2 (so there's a time period when the GemFire cluster is completely down)
  6. Start GF1
  7. Start GF2
  8. Let Client execute a function on GF cluster

 

It's at step 8 when I get the exception.

Btw this is an HA function (which is why I use 2 servers even for the minimal case), I have not tried a non-HA function.

 

I tried several other things. When I e.g. omit step 4 above, everything works fine. Or, if instead of step 8 I do a region.Put on the same region, using as value an instance of the same class as the one I'm passing in to the function as a parameter (and which I assume is causing the exception in GF), it works just fine. And even after a successful region.Put (i.e. proof of a successful serialization/deserialization of the class in question), a function execution will still fail.

 

However, if I make sure that at least one GemFire node stays up at all times, like for example:

 

  1. Start GF1 (gemfire node 1)
  2. Start GF2
  3. Start Client
  4. Let Client execute a function on GF cluster
  5. Kill GF1
  6. Start GF1
  7. Kill GF2
  8. Start GF2
  9. Let Client execute a function on GF cluster

 

everything works flawlessly.

 

So here's an example of the exception pair I'm getting (it comes in pairs, one pair on each GemFire node):

[warning 2013/04/11 13:28:35.773 JST GF1 <ServerConnection on port 40401 Thread 0> tid=0x45] Server connection from [identity(myhostname(default_GemfireDS:4784:loner):2:GFNative_MGihOBTGfg:default_GemfireDS,connection=1; port=59317]: Unexpected Exception
java.lang.IllegalStateException: Unknown pdx type=2
        at com.gemstone.gemfire.internal.InternalDataSerializer.readPdxSerializable(InternalDataSerializer.java:2976)
        at com.gemstone.gemfire.internal.InternalDataSerializer.basicReadObject(InternalDataSerializer.java:2793)
        at com.gemstone.gemfire.DataSerializer.readObject(DataSerializer.java:3212)
        at com.gemstone.gemfire.DataSerializer.readArrayList(DataSerializer.java:2232)
        at com.gemstone.gemfire.internal.InternalDataSerializer.basicReadObject(InternalDataSerializer.java:2687)
        at com.gemstone.gemfire.DataSerializer.readObject(DataSerializer.java:3212)
        at com.gemstone.gemfire.internal.util.BlobHelper.deserializeBlob(BlobHelper.java:81)
        at com.gemstone.gemfire.internal.cache.tier.sockets.CacheServerHelper.deserialize(CacheServerHelper.java:54)
        at com.gemstone.gemfire.internal.cache.tier.sockets.Part.getObject(Part.java:216)
        at com.gemstone.gemfire.internal.cache.tier.sockets.Part.getObject(Part.java:220)
        at com.gemstone.gemfire.internal.cache.tier.sockets.command.ExecuteRegionFunction66.cmdExecute(ExecuteRegionFunction66.java:90)
        at com.gemstone.gemfire.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:173)
        at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:809)
        at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:940)
        at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1189)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at com.gemstone.gemfire.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:532)
        at java.lang.Thread.run(Thread.java:722)
[warning 2013/04/11 13:28:35.781 JST GF1 <ServerConnection on port 40401 Thread 1> tid=0x49] Server connection from [identity(myhostname(default_GemfireDS:4784:loner):2:GFNative_MGihOBTGfg:default_GemfireDS,connection=1; port=59319]: Unexpected Exception
java.lang.IllegalStateException: Unknown pdx type=2
        at com.gemstone.gemfire.internal.InternalDataSerializer.readPdxSerializable(InternalDataSerializer.java:2976)
        at com.gemstone.gemfire.internal.InternalDataSerializer.basicReadObject(InternalDataSerializer.java:2793)
        at com.gemstone.gemfire.DataSerializer.readObject(DataSerializer.java:3212)
        at com.gemstone.gemfire.DataSerializer.readArrayList(DataSerializer.java:2232)
        at com.gemstone.gemfire.internal.InternalDataSerializer.basicReadObject(InternalDataSerializer.java:2687)
        at com.gemstone.gemfire.DataSerializer.readObject(DataSerializer.java:3212)
        at com.gemstone.gemfire.internal.util.BlobHelper.deserializeBlob(BlobHelper.java:81)
        at com.gemstone.gemfire.internal.cache.tier.sockets.CacheServerHelper.deserialize(CacheServerHelper.java:54)
        at com.gemstone.gemfire.internal.cache.tier.sockets.Part.getObject(Part.java:216)
        at com.gemstone.gemfire.internal.cache.tier.sockets.Part.getObject(Part.java:220)
        at com.gemstone.gemfire.internal.cache.tier.sockets.command.ExecuteRegionFunction66.cmdExecute(ExecuteRegionFunction66.java:90)
        at com.gemstone.gemfire.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:173)
        at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:809)
        at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:940)
        at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1189)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at com.gemstone.gemfire.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:532)
        at java.lang.Thread.run(Thread.java:722)

 

 

The relevant parts of my server-side cache.xml:

 

 

<cache>
    <cache-server port="40401"/>
    <pdx>
        <pdx-serializer>
            <class-name>com.gemstone.gemfire.pdx.ReflectionBasedAutoSerializer</class-name>
            <parameter name="classes">
                <string>proto.server.data.QueueState,proto.server.data.Session,proto.data.UpstreamMessage,proto.data.DownstreamMessage</string>
            </parameter>
        </pdx-serializer>
    </pdx>
    <region name="Queue">
        <region-attributes refid="PARTITION">
            <partition-attributes redundant-copies="1">
                <partition-resolver>
                    <class-name>proto.server.partition.QueuePartitionResolver</class-name>
                </partition-resolver>
            </partition-attributes>
        </region-attributes>
    </region>

...

 

 

And the class that's being serialized in the function executions (and the region.Put) is proto.data.UpstreamMessage.


Any suggestions, workarounds etc. appreciated!

 

Cheers

Eugen

Confusion with locator installation


We have gemfire 7 packaged into tcServer instances with spring-data for as much as we can. Factory methods for the rest.

Multiple environments (dev/qa/prod/ types) on a single server.

 

We are differentiating by ip.

 

Ok, so we try to drop in a locator.

Here's a piece of configuration xml to bring up a locator.

 

<bean id="locator" class="com.gemstone.gemfire.distributed.Locator" factory-method="startLocatorAndDS">  <constructor-arg value="${locator.port}" />  <constructor-arg value="${locator.log-file}" />  <constructor-arg>  <bean class="java.net.InetAddress" factory-method="getByName">    <constructor-arg value="${locator.bind-address}" />  </bean>  </constructor-arg>  <constructor-arg ref="locatorProperties" />  <constructor-arg value="true" />  <constructor-arg value="true" />  <constructor-arg value="${locator.bind-address}" /></bean><!-- Properties file bean - required for the LocatorFactory which creates the locator --><bean id="locatorProperties" class="org.springframework.beans.factory.config.PropertiesFactoryBean" depends-on="placeholderConfig">  <property name="properties">    <props>      <prop key="locators">${locator.locators}</prop>      <prop key="bind-address">${locator.bind-address}</prop>      <prop key="mcast-port">${locator.mcast-port}</prop>      <prop key="log-file">${locator.log-file}</prop>      <prop key="log-level">${locator.log-level}</prop>      <prop key="license-data-management">${locator.license-data-management}</prop>    </props>  </property></bean>

 

A lot of trial and error went into this, so there may be some pieces that are superfluous.

It does however work.

 

 

Ok, here's the problem.

Machine has multiple network cards.

When we try to start this up, the locator is binding to the designated port, but on all addresses. So if we wanted (say) dev on 10.2.1.1 with the locator on port 10001, and qa on 10.2.1.2, so that we are running two different grids, it doesn't work as desired.

The first one up is bound to the port everywhere, so that's all she wrote for the second.

 

Is this expected behaviour?

If not, which piece of the configuration did I miss that would allow this to work as described. I had some vague hope that the bind-address controlled this.

Is there a way to replicate data across site?


We have a multi-site implementation separated by a WAN. We have one site up and running with all the data. The other site has the same configuration as the first, except for the data. Is there a way to replicate all the data in all the regions on the first site to the second?
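For background, GemFire's WAN gateways forward ongoing cache operations between sites; data that already existed before the gateway was connected generally has to be seeded separately (for example with a one-time bulk read from the first site and put into the second). A hedged GemFire 6.x-style cache.xml sketch for the first site, with ids, hostnames, and ports purely illustrative:

```xml
<!-- First site: a hub forwarding events to the second site's hub. -->
<gateway-hub id="SITE_A_HUB" port="11111">
  <gateway id="to-site-b">
    <gateway-endpoint id="site-b-hub" host="site-b-host" port="22222"/>
  </gateway>
</gateway-hub>
<!-- Regions participate via enable-gateway="true" in their region-attributes. -->
```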

gfsh talking to locator


I have gfsh on a server running an embedded locator inside a tcServer instance.
I try to connect to the locator and run a command,
and get an error:

get an error

 

gfsh>connect --locator=gemfire-locator-01.delta.na.rtdom.net[10083]
Connecting to Locator at [host=gemfire-locator-01.delta.na.rtdom.net, port=10083] ..
Connecting to Manager at [host=ny-java-51.na.rtdom.net, port=1099] ..
Successfully connected to: [host=ny-java-51.na.rtdom.net, port=1099]

 

gfsh>list members
Exception occurred. Command can not be processed as Command Service did not get initialized. Reason: Could not find Spring Shell library which is needed for CLI/gfsh in classpath. Internal support for CLI & gfsh is not enabled.

I thought this was because the spring-shell library wasn't in the server.

Added it and redeployed.

Same problem.

 

Creating and connecting to a Locator natively works just fine.

Anyone know if I can get it to work from within the tcServer instance, and if so what I missed?

Shell libraries at container level perhaps?

HTTP Session Module for tcServer and local caching


We are using a GemFire client/server configuration for storing HTTP session from tcServer.  For our initial tcServer applications, we ran the client as "PROXY" so no session data is actually stored on the client and the GemFire server always has the current copy of session.  That works well but now we've run into a case where we have a need to store a local copy of session in each client's memory (long story).

 

What we want to do is store session on the tcServer client, but also send updates to the server.  We are using sticky sessions, but in some cases a user will be bounced to another server.  In that case, we want the current copy of the session to be available to the other server.  Now two tcServer clients have the session, and we would hope that any updates would be received by the GemFire server and by any client caches that have that entry in their local cache.  We do this in other GemFire applications (not using the session module) by registering interest in keys, and that seems to work well.

 

We can't seem to get this working with HTTP sessions.  We're trying to do this via CACHING_PROXY_HEAP_LRU, but it appears that updates to the local cache stay on the local cache, and when we move to another server we get an old copy of the session.

 

Is there a way to get these session updates sent to any clients to have this session key in their local cache?  We've tried a number of different settings with no success.

 

This is tcServer 2.8.1 and GemFire 6.6.4.  Below are the client and server side region configurations.  Any insight that you can provide would be appreciated:

 

----------------------------------------------------
Client-side
----------------------------------------------------
client-cache.xml - from ACWT1402_NEW
<client-cache>
  <pool name="sessions" subscription-enabled="true">
    <locator host="acwt1461" port="10335"/>
  </pool>
  <region name="dww_sessions" refid="CACHING_PROXY_HEAP_LRU">
    <region-attributes>
      <subscription-attributes interest-policy="cache-content"/>
      <eviction-attributes>
        <lru-entry-count maximum="1000"/>
      </eviction-attributes>
    </region-attributes>
  </region>
</client-cache>
-----------------------------------------------------
Server-side
-----------------------------------------------------
server-cache.xml - ACWT1461 - SERVER2
  <cache-server port="40405"/>
  <disk-store name="DEFAULT" allow-force-compaction="true" max-oplog-size="50" >
    <disk-dirs>
      <disk-dir>d:\vFabric\GemFire-6.6.4\server2\default-diskstore</disk-dir>
    </disk-dirs>
  </disk-store>
  <pdx persistent="true"/>
 
  <region name="dww_sessions">
    <region-attributes enable-gateway="false" data-policy="replicate" scope="distributed-ack" statistics-enabled="true" >
      <entry-time-to-live>
        <expiration-attributes timeout="0" action="destroy">
          <custom-expiry>
            <class-name>com.gemstone.gemfire.modules.util.SessionCustomExpiry</class-name>
          </custom-expiry>
        </expiration-attributes>
      </entry-time-to-live>
      <subscription-attributes interest-policy="all"/>
      <eviction-attributes>
        <lru-memory-size maximum="200" action="overflow-to-disk"/>
      </eviction-attributes> 
    </region-attributes>
  </region>
</cache>

