Channel: VMware Communities : All Content - vFabric GemFire [ARCHIVED]
Viewing all 990 articles

Can existing disk stores be renamed?


Hi All

 

We want to upgrade to Spring Data GemFire 1.2.2 / GemFire 6.5.1 to make use of the key/value constraint, but the upgrade broke our existing disk store XML config (disk stores have been moved to the top level in the newer version).

 

We have existing disk files that share the same name as the region we want to migrate; however, with the new config the disk stores will use a different id and will therefore create differently named disk store files.

 

Is there a way to rename the existing disk store to the new name so that the existing persistent files can be migrated seamlessly?

 

Thanks


Spring + EclipseLink + Gemfire L2 Cache


Hi,

 

I want to set up GemFire as an L2 cache; we are using JPA with EclipseLink.

It seems that GemFire's L2 cache module was designed for Hibernate.

 

Are there any docs, solutions, or plug-ins that would confirm whether this is possible?

 

Thank you !

OQL to query nested Objects


My object structure is as below:

class A
{
   public String key;
   public String fieldName;
   public List<ArrayList<MyCustomObject>> myNewObject = new ArrayList<ArrayList<MyCustomObject>>();
}

 

class MyCustomObject
{
   public String key;
   public String value;
}

Now I want to write an OQL query on this object. How do I write a query where "key" equals a given value?

SELECT ern FROM /ExampleRegion er, er.myNewObject ern

 

This will return ArrayList<MyCustomObject> instances.

How do I write a condition such as ern.items."key" = "mykeyToCompare"?

 

Is there any solution to this issue?
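For reference, here is a plain-Java sketch of the filtering that such a query needs to express: iterate the outer list, then each inner list, and keep elements whose key matches. The OQL in the comment is an untested assumption modeled on GemFire's nested FROM-clause iteration; the class and field names are taken from the post, and MyCustomObject is redefined here only to keep the sketch self-contained.

```java
import java.util.ArrayList;
import java.util.List;

public class NestedQuerySketch {

    static class MyCustomObject {
        public String key;
        public String value;
        MyCustomObject(String key, String value) { this.key = key; this.value = value; }
    }

    // Rough OQL equivalent (an assumption, not verified against a cluster):
    //   SELECT obj
    //   FROM /ExampleRegion a, a.myNewObject inner, inner obj
    //   WHERE obj.key = 'mykeyToCompare'
    static List<MyCustomObject> findByKey(
            List<ArrayList<MyCustomObject>> myNewObject, String keyToCompare) {
        List<MyCustomObject> matches = new ArrayList<MyCustomObject>();
        // Two nested loops mirror the two extra FROM-clause iterators above.
        for (List<MyCustomObject> inner : myNewObject) {
            for (MyCustomObject obj : inner) {
                if (obj.key.equals(keyToCompare)) {
                    matches.add(obj);
                }
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        ArrayList<MyCustomObject> inner = new ArrayList<MyCustomObject>();
        inner.add(new MyCustomObject("mykeyToCompare", "v1"));
        inner.add(new MyCustomObject("other", "v2"));
        List<ArrayList<MyCustomObject>> outer = new ArrayList<ArrayList<MyCustomObject>>();
        outer.add(inner);
        System.out.println(findByKey(outer, "mykeyToCompare").size()); // prints 1
    }
}
```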

Error Integrating Gemfire 7.0.1 with Hibernate 4.x as L2 cache


Hi,

 

I am trying to set up Hibernate 4.3.0 with Gemfire 7.0.1

 

1-

<prop key="hibernate.cache.region.factory_class">com.gemstone.gemfire.modules.hibernate.GemFireRegionFactory</prop>

 

2 - jars :

gemfire.jar

gemfire-modules-hibernate-7.0.1.jar

hibernate-core-4.3.0.Beta1.jar

hibernate-entitymanager-4.3.0.Beta1.jar

hibernate-jpa-2.1-api-1.0.0.Draft-16.jar

 

 

I get an error when I deploy:

 

INFO   | jvm 1    | 2013/04/26 17:28:12 | Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in ServletContext resource [/WEB-INF/webmvc-context.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: org/hibernate/cache/EntityRegion
............................................
INFO   | jvm 1    | 2013/04/26 17:28:12 | Caused by: java.lang.NoClassDefFoundError: org/hibernate/cache/EntityRegion

 

It seems to me to be the same problem as interfacing Hibernate 4 with EhCache, according to: http://www.javacraft.org/2012/03/migrate-to-hibernate-4-ehcache.html

 

Any help will be appreciated.

 

Thanks

Messages still queued up at GemFire cache server when native client GUI crashed but did not exit


Hi there.

 

Here's the issue we met.

We have a .NET native GUI client. Somehow our GUI gets closed but does not exit completely; the registration keys on the GemFire server still keep the connection alive, but no messages can be consumed. That presumably means no acknowledgements are sent back to the server, so messages keep queueing up on the server side. We bounce the server every day, but the client seems to hang around, the server queues keep growing, and this eats our memory. In the logs I saw the GemFire server try to terminate the unresponsive client (no response for over 60,000 ms), but it always failed.

 

We tried enabling the config param remove-unresponsive-client=true, expecting the server to remove the client connection and free the queues, but it made no difference.

 

Is there anything else we can do? We can't guarantee that the client always exits normally, so we need a mechanism to get rid of these abnormal connections and free the memory.

 

thanks.

Keeping Gemfire in Sync with a Database


We are developing an application that makes use of deep object models that are being stored in Gemfire. Clients will access a REST service to perform cache operations instead of accessing the cache via the Gemfire client. We are also planning to have a database underneath Gemfire as the system of record making use of asynchronous write-behind to do the DB updates and inserts.

 

In this scenario it seems to be a non-trivial problem to guarantee that an insert or update into GemFire will result in a successful insert or update in the DB without setting up elaborate server-side validation (essentially, the constraints on the GemFire operation would have to match the DB operation constraints). We cannot bubble DB insert or update success/failure back to the client without making the DB call synchronous with the GemFire operation, which would obviously defeat the purpose of using GemFire for low-latency client operations.

 

We are curious how other GemFire adopters using write-behind have solved the problem of keeping the DB in sync with the GemFire data fabric.

What is the best practice for embedded map?


I want to move a cache (a multi-level embedded map) to GemFire and would like some suggestions. Thanks.

 

The current cache is Map<Integer, Map<Integer, Map<Integer, Map<Boolean, MyObject>>>>

You can treat the four levels as year, month, week, and day.

It is easy to get data at any level with the current implementation.

 

I have thought of three solutions for GemFire.

S1: Use String as key, and Map<Integer, Map<Integer, Map<Boolean, MyObject>>> as value.

It is the simplest to implement, but data updates have poor performance: one MyObject change will lead to propagation of the whole value (the map may include several million MyObject instances).

 

S2: Use Object(String, Integer, String, Boolean) as key, and MyObject as value. Use OQL to get each level data.

Can OQL be as fast as current implementation?

 

S3: Use dynamic regions, but I'm not sure whether this is implementable.

Can a dynamic region be created under a dynamic parent region?

ReplicatedRegion
|-- dynamicY2012
|   |-- dynM1
|   |   `-- dynW1
|   |       |-- dynD1
|   |       `-- dynD2
|   `-- ... dynM12
`-- dynamicY2013

Durable client, reconnect and detecting interest


Hi,

 

Given the following scenario:

1. Durable client

2. Server side register interest

3. Durable client lost connectivity for more than the durable-client-timeout setting

4. Connectivity is restored and GemFire durable client automatically reconnects (GF client internally just reconnects)

 

What are the way(s) to detect that the client's durability has already expired on the cache server?

 

Thank you


How to use the gfsecurity.properties ?


Hi, I have the following problem. I need to put all the security fields (security-*) in a different properties file than gemfire.properties. I read that a gfsecurity.properties file exists, but I don't know how to configure it.

Also, I don't know whether this feature is only available in GemFire 7.0.
Is that true?

Managing Open File Descriptors on GemFire Data Node Hosts



Open file descriptors (FD's) are an OS resource used for file handles and network connections (sockets). Given the distributed nature of GemFire deployments, it is often the case that GemFire data nodes (servers) have many socket connections open as they communicate with other servers in the same cluster, with clients, or with other GemFire clusters via WAN gateway connections. In such cases the default OS limit on the number of open file descriptors may not be high enough. The OS defaults can be quite low for today's needs and available resources; as is often the case, they simply have not been adjusted to reflect the new reality.

 

As a general guideline, we recommend setting the FD limit to 50000 or higher, depending on estimated use. Nowadays even smartphones have enough RAM to support much higher numbers of open file descriptors; Linux, for example, uses roughly 1KB of RAM per open file descriptor for bookkeeping. So, the memory overhead for 50000 open file descriptors is less than 50MB. Given such a low overhead, it is better to over-allocate than to risk running out of file descriptors and bringing a production system to a standstill.

 

Estimating the maximum number of FD's used by a GemFire Data Node

 

To estimate the maximum number of open file descriptors that may be in use by GemFire, take into account the following:

 

  • The number of concurrent connections to the server JVM (n1)
  • The maximum number of worker threads in the server JVM (maxThreads)
  • The number of peers this JVM will be connected to. For instance, if the server cluster has 10 peer nodes and conserve-sockets is set to false (for high performance), you will need (maxThreads x 10 x 2) additional connections (n2)
  • The number of open files the JVM will be working with. This is generally not more than 20 (n3)

 

The max FD limit per JVM should be greater than: n1 + n2 + n3.
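As a worked example of the n1 + n2 + n3 estimate, the following sketch plugs in hypothetical numbers (500 concurrent client connections, 16 worker threads, 10 peers with conserve-sockets=false, and 20 open files); only the formula itself comes from this article.

```java
public class FdEstimate {

    // n1: concurrent connections to the server JVM
    // maxThreads, peers: with conserve-sockets=false, each worker thread may
    // hold two sockets per peer, giving n2 = maxThreads * peers * 2
    // n3: open files the JVM works with (generally <= 20)
    public static int estimateFdLimit(int n1, int maxThreads, int peers, int n3) {
        int n2 = maxThreads * peers * 2;
        return n1 + n2 + n3;
    }

    public static void main(String[] args) {
        // Hypothetical deployment: 500 + (16 * 10 * 2) + 20
        System.out.println(estimateFdLimit(500, 16, 10, 20)); // prints 840
    }
}
```

Since the estimate (840 here) is a lower bound on the FD limit, the article's 50000 recommendation leaves ample headroom for JVM-internal selectors and growth.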

 

One way to easily determine the FD usage in a running environment is to monitor the GemFire/SQLFire FD usage statistic. You can view this statistic using VSD. For more information see the article File Descriptor Issue.

 

One other thing that complicates matters, and should be taken into account, is that the JVM itself can contribute to an increase in open file descriptors: the Sun JVM uses internal per-thread selectors for blocking network I/O, and each of those selectors uses three file descriptors. The JVM does not clean up these selectors and their file descriptors until garbage collection, so the number of these selectors and file descriptors can keep growing, and if GC does not happen often enough, the FD limit can be reached. The Sun JVM uses sun.nio.ch.Util$SelectorWrapper$Closer objects to clean up these selectors and their file descriptors; if a heap histogram shows a large number of them, there will also be three times as many FD's in use, which can be verified using lsof. Upon GC, the number of Closer objects and file descriptors will go down.

 

Controlling GemFire's use of FD's

 

In light of the above discussion, controlling GemFire's use of file descriptors may involve tuning JVM garbage collection as well as the GemFire settings that control the use of sockets and threads:

 

1. Garbage collection on the servers: use the CMS garbage collector, with CMSInitiatingOccupancyFraction set to a level that ensures a regular and timely GC cycle.

 

2. If using a GemFire version prior to 7.0.1, increase the gemfire.IDLE_THREAD_TIMEOUT Java system property on the servers (default=15000ms). GemFire 7.0.1 increased the default to 30 minutes, which should be high enough.

 

3. Increase the socket-lease-time GemFire property on the servers (default=60000 ms), or set to 0 to disable timeout altogether.

 

4. On the clients, increase the client pool idle-timeout (default=5000 ms), or even turn it off by setting it to -1.

 

 

Modifying the Limit of Open File Descriptors on Linux

 

Limits can be checked using ulimit, like so:

 

     ulimit -n -S

 

for soft limit on open files per process, and:

 

     ulimit -n -H

 

for hard limit on open files per process. ulimit can be used to change the soft limit. Limits are configured in /etc/security/limits.conf, access to which requires superuser privileges.
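For example, a limits.conf entry raising both limits to the recommended 50000 might look like the following sketch; the `gemfire` user name is an assumption, so substitute the account that runs your data nodes:

```
# /etc/security/limits.conf -- raise open-file limits for the GemFire user
gemfire  soft  nofile  50000
gemfire  hard  nofile  50000
```

The new limits typically take effect on the next login session of that user, so restart the GemFire processes afterwards.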

Gem Fire - Shared-Nothing Disk Persistence


Hi,

How good is GemFire's shared-nothing disk persistence compared to the NAS that GemFire can use?

The GemFire docs say it outperforms NAS, but this is not clear to me.

Please clarify.

 

Thanks,

Chakri

Problem with gfsecurity in agent


Hi, I was trying to split the configuration properties and security properties using a gfsecurity.properties file, but I found a problem with the agent.

 

 

I have an agent.properties with the following values:

 

 

 

# logging properties

log-level

log-disk-space-limit

log-file-size-limit

 

 

# gemfire cluster properties

locators

mcast-port

http-enabled

http-port

rmi-port

 

 

# license properties

license-application-cache

license-data-management

 

 

## security properties

security-peer-auth-init

security-peer-authenticator

security-client-authenticator

 

 

security-keystorepath

security-keystorepass

security-alias

 

 

security-publickey-filepath

security-publickey-pass

 

 

security-crypto-provider

security-crypto-provider-class

security-crypto-max-ciphers

security-crypto-max-keygens

 

 

 

Now I have two properties files (agent.properties and gfsecurity.properties).

 

 

In gfsecurity.properties I put all the ## security values (security-peer-auth-init, ...).

 

 

 

The problem is that the agent is running, but the log shows this:

 

 

 

Caused by: java.lang.IllegalArgumentException: Could not set "security-peer-auth-init" to "com.gire.rp.server.infrastructure.security.ServerAuthInit.create" because "mcast-port[10334]" must be 0 when security is enabled.

 

 

 

Why does this happen?

RegionDestroyedException while creating region


Hi,

 

I noticed RegionDestroyedException while creating a region in our application. This is the first time we are noticing this and after restarting application it worked as expected. In what cases does create method on RegionFactory throw this exception? Also, if this happens in future, how do we handle such cases? Should the code catch this exception and retry creating the region? Please suggest.

 

Region is created this way -

regionFactory.create("regionName")

 

Thanks

AsyncEventListener


I have AsyncEventQueues with custom AsyncEventListeners installed in my application. I noticed on application shutdown that the AsyncEventListener's close method is being called twice even though I am only configuring one listener in the region. It seems that there are multiple AsyncEventListeners being instantiated. Is this expected behavior?

Spring Data GemFire Exception: loadCaches must not return an empty Collection


I would like to set up Spring method caching backed by GemFire. The Spring bean cache configuration reads as below:

 

<gfe:cache id="simple" properties-ref="props" />
<util:properties id="props" location="classpath:cache.properties"/>
<gfe:transaction-manager cache-ref="simple" />

<gfe:replicated-region id="gestionAccesCacheModel" cache-ref="simple">
</gfe:replicated-region>

<gfe:replicated-region id="gestionReferentielCacheModel" cache-ref="simple">
</gfe:replicated-region>

<cache:annotation-driven />
<bean id="cacheManager" class="org.springframework.data.gemfire.support.GemfireCacheManager" p:cache-ref="simple"/>

 

 

cache.properties  :

 

locators=localhost[55221]

 

RecupererDroitsSMImpl.java :

@Service
public class RecupererDroitsSMImpl implements RecupererDroitsSM {

    @Cacheable(value = "gestionAccesCacheModel")
    public List<HabilitationCollege> recupererDroitsByUsers(String utilisateur, String abonnement) {
        ....................................
    }
}

 

pom.xml :

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-gemfire</artifactId>
    <scope>system</scope>
    <systemPath>D:/repository/spring-data-gemfire-1.3.0.RELEASE.jar</systemPath>
</dependency>
<dependency>
    <groupId>com.gemstone.gemfire</groupId>
    <artifactId>gemfire</artifactId>
    <version>6.6.2</version>
</dependency>

 

I get the following exception:

 

com/socgen/cmc/service/metier/international/habilitation/RecupererDroitsSMImpl.class]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.cache.interceptor.CacheInterceptor#0': Cannot resolve reference to bean 'cacheManager' while setting bean property 'cacheManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cacheManager' defined in ServletContext resource [/WEB-INF/conf/commun/cmc-commun-cache.xml]: Invocation of init method failed; nested exception is java.lang.IllegalArgumentException: loadCaches must not return an empty Collection


Any help would be appreciated.


thanks

 


Test for local primary bucket


How can I find whether the primary bucket is hosted locally for any given entry in a partitioned region? I am working with large objects in my partitioned region and I want to optimize operations on those objects by only working on locally hosted copies.

Gemfire Region Colocation


I would like to know a bit more about how colocation works. The documentation in this area does not seem very detailed, or at least is not very clear to me.

 

Let's say I have 3 server nodes and 3 partitioned regions (Customer, Orders, Shipments) with total-buckets=13, and I would like to define that customers are grouped as domestic and global.

Considering that both groups are big, does that mean that all domestic customers will go to one bucket and all global ones to another, with the rest of the buckets empty? (Or does the grouping relate to servers, so that one of the servers would be empty, assuming there are no redundant copies?)

If that is true, I would conclude that it is better to define a more granular colocation routing object. Or is GemFire smart enough to still divide the data across different buckets and different servers, and just try not to put global and domestic in the same bucket?

Another question: if I use colocation, does that mean all keys have to be the same type? Do I have to use a PartitionResolver key, or is a String enough?

 

What I'm looking for is for GemFire to try to keep Customers grouped (Domestic or Global) within the region, using a PartitionResolver whose getRoutingObject returns Domestic or Global, but I don't want it to use just one bucket per group; I would like the entries to still be spread out, just with the number of buckets used minimized. And then, if that is possible, I'd like Orders and Shipments colocated with Customers by CustomerId, not by Domestic or Global (a different PartitionResolver).

 

I'm afraid that exact scenario is not possible, but at least I want to understand how grouping (through a PartitionResolver) and colocation work: is it a hard constraint that all related data has to be in one bucket?

 

And whether my fallback scenario is possible, which is just colocation by CustomerId using the key alone, without a PartitionResolver...

 

Thanks; I will appreciate any input.
Jack

destroy() vs. remove()


What is the difference between Region.destroy(Object key) and Region.remove(Object key)? They look identical in the API docs.

Inserting thousands of objects and Cache Listener


If I have a partitioned region and I do a putAll of several thousand objects, does that mean the CacheListener will fire several thousand times (split across the server nodes)?

I assume it will. Is there any workaround, or should I just be careful with CacheListener...?

 

I'm trying to mimic a type of batch processing: thousands of raw records are inserted into a partitioned region (to split the workload), and then each node reads the raw data, does transformation processing, and inserts the processed data back. Has anybody had a similar scenario? My idea is, instead of inserting individual raw records, to group them into several chunks...

 

Jack

GMS shun


Every few days one of my GemFire servers gets shunned and needs to be restarted. The GemFire log only shows this message and not much else. From the logs and thread dumps, my application looks perfectly fine. Can I disable GMS.shun? It has been removed from the JGroups project. Or can it indicate some other bug or configuration problem in my application?

 

[severe 2013/05/23 09:58:39.981 EST GemFire_ServerSide_Production <CloserThread> tid=0xe238b] Membership service failure: Channel closed: com.gemstone.gemfire.ForcedDisconnectException: This member has been forced out of the distributed system.  Please consult GemFire logs to find the reason. (GMS shun)

 

This is what's in the locator log:

 

[info 2013/05/23 09:58:10.644 EST  <VERIFY_SUSPECT.TimerThread> tid=0x34a2] No suspect verification response received from gemoc011(30843)<v293>:21251 in 5001 milliseconds: declaring it dead

[info 2013/05/23 09:58:10.798 EST  <UDP ucast receiver> tid=0x1b] Membership: received new view  [gemoc021(32727)<v1>:40020|308] [gemoc021(32727)<v1>:40020/2957, gemoc011(17072)<v3>:40061/39934, gemoc021(29430)<v30>:65465/31761, gemoc011(16558)<v31>:17742/41610, gemoc021(20340)<v292>:18368/59699] crashed mbrs: [gemoc011(30843)<v293>:21251/38294]

[info 2013/05/23 09:58:10.800 EST  <View Message Processor> tid=0x39] Member at gemoc011(30843)<v293>:21251/38294 unexpectedly left the distributed cache: departed JGroups view


