diff --git a/docs/extensions/custom-commands.md b/docs/extensions/custom-commands.md
index edd1f912..59f5ee4f 100644
--- a/docs/extensions/custom-commands.md
+++ b/docs/extensions/custom-commands.md
@@ -38,9 +38,20 @@ Cachex.start_link(:my_cache, [
 ])
 ```
 
-Each command receives a cache value to operate on and return. A command flagged as `:read` (such as `:last` above) will simply transforms the cache value before the final command return occurs, allowing the cache to mask complicated logic from the calling module. Commands flagged as `:write` are a little more complicated, but still fairly easy to grasp. These commands *must* return a 2-element tuple, with the return value in index `0` and the new cache value in index `1`.
+Each command receives a cache value to operate on and return. A command flagged as `:read` will simply transform the cache value before it's returned to the user, allowing a developer to mask complicated logic inside the cache itself rather than in the calling module. This is suitable for storing specific structures in your cache and allowing "direct" operations on them (e.g. lists, maps, etc.).
 
-It is important to note that custom cache commands _will_ receive `nil` values in the cache of a missing cache key. If you're using a `:write` command and receive a misisng value, your returned modified value will only be written back to the cache if it's no longer `nil`. This allows the developer to implement logic such as lazy loading, but also escape the situation where you're cornered into writing to the cache.
+Commands flagged as `:write` are a little more complicated, but still fairly easy to grasp. These commands *must* always resolve to a 2-element tuple, with the value to return from the call at index `0` and the new cache value at index `1`. You can either return the 2-element tuple as-is, or wrap it in the `:commit`/`:ignore` tuple format used by other Cachex interfaces:
+
+```elixir
+lpop = fn
+  ([ head | tail ]) ->
+    {:commit, {head, tail}}
+  (_) ->
+    {:ignore, nil}
+end
+```
+
+This provides uniform handling across cache interfaces, and makes it possible to implement things like lazy loading while giving the developer an escape hatch in cases where writing should be skipped. This is not perfect, so behaviour here may change in future as new options become available.
 
 ## Invoking Commands
 
diff --git a/docs/general/committing-changes.md b/docs/general/committing-changes.md
new file mode 100644
index 00000000..8cf50a95
--- /dev/null
+++ b/docs/general/committing-changes.md
@@ -0,0 +1,88 @@
+# Batching Actions
+
+It's sometimes the case that you need to execute several cache actions in a row. Although you can do this in the normal way, it's actually somewhat inefficient, as each call has to do various management work (such as looking up cache states). For this reason Cachex offers several mechanisms for making multiple calls in sequence.
+
+## Submitting Batches
+
+The simplest way to make several cache calls together is `Cachex.execute/3`. This API accepts a function which receives a pre-validated cache state that can be used (instead of the cache name) to execute cache actions.
+This will skip all of the cache management overhead you'd typically see:
+
+```elixir
+# standard way to execute several actions
+r1 = Cachex.get!(:my_cache, "key1")
+r2 = Cachex.get!(:my_cache, "key2")
+r3 = Cachex.get!(:my_cache, "key3")
+
+# using Cachex.execute/3 to optimize the batch of calls
+{r1, r2, r3} =
+  Cachex.execute!(:my_cache, fn cache ->
+    # execute our batch of actions
+    r1 = Cachex.get!(cache, "key1")
+    r2 = Cachex.get!(cache, "key2")
+    r3 = Cachex.get!(cache, "key3")
+
+    # pass back all results as a tuple
+    {r1, r2, r3}
+  end)
+```
+
+Although this syntax might look a little more complicated at a glance, it should be fairly straightforward to get used to. The small change in approach here gives a fairly large boost to cache throughput. To compare the two examples above, we can use a tool like [Benchee](https://github.com/PragTob/benchee) for a rough comparison:
+
+```
+Name                 ips        average  deviation         median         99th %
+grouped           1.72 M      580.68 ns  ±3649.68%         500 ns         750 ns
+individually      1.31 M      764.02 ns  ±2335.25%         625 ns         958 ns
+```
+
+We can clearly see the time saving when using the batched approach, even if there is a large deviation in the numbers above. Somewhat intuitively, the time saving scales with the number of actions you're executing in your batch, even if it is unlikely that anyone is doing more than a few calls at once.
+
+It's important to note that even though you're executing a batch of actions, other processes can access and modify keys at any time during your `Cachex.execute/3` call. These calls still occur in your calling process; they're not sent through any kind of arbitration process. To demonstrate this, here's a quick example:
+
+```elixir
+# start our execution block
+Cachex.execute!(:my_cache, fn cache ->
+  # set a base value in the cache
+  Cachex.put!(cache, "key", "value")
+
+  # we're paused but other changes can happen
+  :timer.sleep(5000)
+
+  # this may have been set elsewhere
+  Cachex.get!(cache, "key")
+end)
+```
+
+As we wait 5 seconds before reading the value back, the value may have been modified or even removed by other processes using the cache (such as TTL cleanup or other places in your application). If you want to guarantee that nothing is modified between your interactions, you should consider a transactional block instead.
+
+## Transactional Batches
+
+A transactional block will guarantee that your actions against a cache key will happen with zero interaction from other processes. Transactions look almost exactly the same as `Cachex.execute/3`, except that they require a list of keys to lock for the duration of their execution.
+
+The entry point to a Cachex transaction is (unsurprisingly) `Cachex.transaction/4`. If we take the example from the previous section, let's look at how we can guarantee consistency between our cache calls:
+
+```elixir
+# start our execution block
+Cachex.transaction!(:my_cache, ["key"], fn cache ->
+  # set a base value in the cache
+  Cachex.put!(cache, "key", "value")
+
+  # we're paused but other changes will not happen
+  :timer.sleep(5000)
+
+  # this will be guaranteed to return "value"
+  Cachex.get!(cache, "key")
+end)
+```
+
+It's critical to provide the keys you wish to lock when calling `Cachex.transaction/4`, as any keys not specified will still be available to be written by other processes during your function's execution. If you're making a simple cache call, the transactional flow will only be taken if there is a simultaneous transaction happening against the same key.
+This enables caches to stay lightweight whilst allowing for these batches when they really matter.
+
+Another pattern which may prove useful is providing an empty list of keys, which will guarantee that your transaction runs at a time when no keys in the cache are currently locked. For example, the following code will guarantee that no keys are locked when purging expired records:
+
+```elixir
+Cachex.transaction!(:my_cache, [], fn cache ->
+  Cachex.purge!(cache)
+end)
+```
+
+Transactional flows are only enabled the first time you call `Cachex.transaction/4`, so you shouldn't see any performance penalty in the case you're not actively using transactions. This also has the benefit of not requiring transaction support to be configured inside the cache options, as was the case in earlier versions of Cachex.
+
+The last major difference between `Cachex.execute/3` and `Cachex.transaction/4` is where they run; transactions are executed inside a secondary worker process, so each transaction will run only after the previous has completed. As such there is a minor performance overhead when working with transactions, so use them only when you need to.
diff --git a/docs/management/limiting-caches.md b/docs/management/limiting-caches.md
index 170109a6..078aa3a1 100644
--- a/docs/management/limiting-caches.md
+++ b/docs/management/limiting-caches.md
@@ -1,6 +1,6 @@
 # Limiting Caches
 
-Cache limits are restrictions on a cache to ensure that it stays within given bounds. The limits currently shipped inside Cachex are based around the number of entries inside a cache, but there are plans to add new policies in future (for example basing the limits on memory spaces). You even even write your own!
+Cache limits are restrictions on a cache to ensure that it stays within given bounds. The limits currently shipped inside Cachex are based around the number of entries inside a cache, but there are plans to add new policies in future (for example basing the limits on memory spaces). You can even write your own!
 
 ## Manual Pruning
 
@@ -14,19 +14,21 @@
 Cachex.start(:my_cache)
 
 # insert 100 keys
 for i <- 1..100 do
-  Cachex.put!(:my_cache, i, i)
+  Cachex.put(:my_cache, i, i)
 end
 
 # guarantee we have 100 keys in the cache
-{ :ok, 100 } = Cachex.size(:my_cache)
+100 = Cachex.size(:my_cache)
 
 # trigger a pruning down to 50 keys only
-{ :ok, true } = Cachex.prune(:my_cache, 50, reclaim: 0)
+50 = Cachex.prune(:my_cache, 50, reclaim: 0)
 
 # verify that we're down to 50 keys
-{ :ok, 50 } = Cachex.size(:my_cache)
+50 = Cachex.size(:my_cache)
 ```
 
+As part of pruning, `Cachex.prune/3` will trigger a call to `Cachex.purge/2` to first remove expired entries before cutting potentially unnecessary entries. While the return value of `Cachex.prune/3` represents how many cache entries were *pruned*, it should be noted that the number of expired entries is not included in this value.
+
 The `:reclaim` option can be used to reduce thrashing, by evicting an additional number of entries. In the case above the next write would cause the cache to once again need pruning, and so on. The `:reclaim` option accepts a percentage (as a decimal) of extra keys to evict, which gives us a buffer between pruning of a cache. To demonstrate this we can run the same example as above, except using a `:reclaim` of `0.1` (the default).
This time we'll be left with 45 keys instead of 50, as we reclaimed an extra 10% of the table (`50 * 0.1 = 5`):
 
@@ -37,17 +39,17 @@
 Cachex.start(:my_cache)
 
 # insert 100 keys
 for i <- 1..100 do
-  Cachex.put!(:my_cache, i, i)
+  Cachex.put(:my_cache, i, i)
 end
 
 # guarantee we have 100 keys in the cache
-{ :ok, 100 } = Cachex.size(:my_cache)
+100 = Cachex.size(:my_cache)
 
 # trigger a pruning down to 50 keys, reclaiming 10%
-{ :ok, true } = Cachex.prune(:my_cache, 50, reclaim: 0.1)
+55 = Cachex.prune(:my_cache, 50, reclaim: 0.1)
 
 # verify that we're down to 45 keys
-{ :ok, 45 } = Cachex.size(:my_cache)
+45 = Cachex.size(:my_cache)
 ```
 
 It is almost never a good idea to set `reclaim: 0` unless you have very specific use cases, so it's recommended to leave `:reclaim` at its default value - it was only set to `0` above for demonstration purposes.
diff --git a/lib/cachex.ex b/lib/cachex.ex
index 3e28b524..1fdd4741 100644
--- a/lib/cachex.ex
+++ b/lib/cachex.ex
@@ -55,48 +55,29 @@ defmodule Cachex do
 import Kernel, except: [inspect: 2]
 
 # the type aliases for a cache type
- @type t :: atom | Cachex.Spec.cache()
+ @type t :: atom() | Cachex.Spec.cache()
 
 # custom status type
+ @type error :: {:error, atom()}
 @type status :: :ok | :error
 
 # generate unsafe definitions
 @unsafe [
-   clear: [1, 2],
    decr: [2, 3, 4],
-   del: [2, 3],
-   empty?: [1, 2],
    execute: [2, 3],
-   exists?: [2, 3],
-   expire: [3, 4],
-   expire_at: [3, 4],
    export: [1, 2],
    fetch: [3, 4],
-   get: [2, 3],
-   get_and_update: [3, 4],
+   get_and_update: [3, 4, 5],
    import: [2, 3],
    incr: [2, 3, 4],
    inspect: [2, 3],
-   invoke: [3, 4],
-   keys: [1, 2],
-   persist: [2, 3],
-   prune: [2, 3],
-   purge: [1, 2],
-   put: [3, 4],
+   invoke: [3, 4, 5],
    put_many: [2, 3],
-   refresh: [2, 3],
-   reset: [1, 2],
    restore: [2, 3],
    save: [2, 3],
-   size: [1, 2],
    stats: [1, 2],
    stream: [1, 2, 3],
-   take: [2, 3],
-   touch: [2, 3],
-   transaction: [3, 4],
-   ttl: [2, 3],
-   update: [3, 4],
-   warm: [1, 2]
+   transaction: [3, 4]
 ]
 
 ##############
@@ -327,8 +308,7 @@ defmodule Cachex do
 #
 # This will start all cache services required using the `Cachex.Services`
 # module and attach them under a Supervisor instance backing the cache.
- @spec init(cache :: Cachex.t()) ::
-  {:ok, {Supervisor.sup_flags(), [Supervisor.child_spec()]}}
+ @spec init(cache :: Cachex.t()) :: {:ok, {Supervisor.sup_flags(), [Supervisor.child_spec()]}}
 def init(cache() = cache) do
 cache
 |> Services.cache_spec()
@@ -347,16 +327,16 @@ defmodule Cachex do
 iex> Cachex.put(:my_cache, "key", "value")
 iex> Cachex.get(:my_cache, "key")
 iex> Cachex.size(:my_cache)
- { :ok, 1 }
+ 1
 
 iex> Cachex.clear(:my_cache)
- { :ok, 1 }
+ 1
 
 iex> Cachex.size(:my_cache)
- { :ok, 0 }
+ 0
 
 """
- @spec clear(Cachex.t(), Keyword.t()) :: {status, integer}
+ @spec clear(Cachex.t(), Keyword.t()) :: integer()
 def clear(cache, options \\ []) when is_list(options),
 do: Router.route(cache, {:clear, [options]})
@@ -376,19 +356,18 @@ defmodule Cachex do
 iex> Cachex.put(:my_cache, "my_key", 10)
 iex> Cachex.decr(:my_cache, "my_key")
- { :ok, 9 }
+ 9
 
 iex> Cachex.put(:my_cache, "my_new_key", 10)
 iex> Cachex.decr(:my_cache, "my_new_key", 5)
- { :ok, 5 }
+ 5
 
 iex> Cachex.decr(:my_cache, "missing_key", 5, default: 2)
- { :ok, -3 }
+ -3
 
 """
- @spec decr(Cachex.t(), any, integer, Keyword.t()) :: {status, integer}
- def decr(cache, key, amount \\ 1, options \\ [])
-  when is_integer(amount) and is_list(options) do
+ @spec decr(Cachex.t(), any(), integer(), Keyword.t()) :: integer() | Cachex.error()
+ def decr(cache, key, amount \\ 1, options \\ []) when is_integer(amount) and is_list(options) do
 via_opt = via({:decr, [key, amount, options]}, options)
 incr(cache, key, amount * -1, via_opt)
 end
@@ -396,8 +375,8 @@ defmodule Cachex do
 @doc """
 Removes an entry from a cache.
 
- This will return `{ :ok, true }` regardless of whether a key has been removed
- or not. The `true` value can be thought of as "is key no longer present?".
+ This will return `:ok` regardless of whether a key has been removed or
+ not, signalling that the key is no longer present in the cache.
## Examples @@ -406,13 +385,13 @@ defmodule Cachex do { :ok, "value" } iex> Cachex.del(:my_cache, "key") - { :ok, true } + :ok iex> Cachex.get(:my_cache, "key") - { :ok, nil } + nil """ - @spec del(Cachex.t(), any, Keyword.t()) :: {status, boolean} + @spec del(Cachex.t(), any(), Keyword.t()) :: :ok def del(cache, key, options \\ []) when is_list(options), do: Router.route(cache, {:del, [key, options]}) @@ -427,14 +406,14 @@ defmodule Cachex do iex> Cachex.put(:my_cache, "key1", "value1") iex> Cachex.empty?(:my_cache) - { :ok, false } + false iex> Cachex.clear(:my_cache) iex> Cachex.empty?(:my_cache) - { :ok, true } + true """ - @spec empty?(Cachex.t(), Keyword.t()) :: {status, boolean} + @spec empty?(Cachex.t(), Keyword.t()) :: boolean() def empty?(cache, options \\ []) when is_list(options), do: Router.route(cache, {:empty?, [options]}) @@ -457,18 +436,18 @@ defmodule Cachex do iex> Cachex.put(:my_cache, "key1", "value1") iex> Cachex.put(:my_cache, "key2", "value2") iex> Cachex.execute(:my_cache, fn(worker) -> - ...> val1 = Cachex.get!(worker, "key1") - ...> val2 = Cachex.get!(worker, "key2") + ...> val1 = Cachex.get(worker, "key1") + ...> val2 = Cachex.get(worker, "key2") ...> [val1, val2] ...> end) - { :ok, [ "value1", "value2" ] } + [ "value1", "value2" ] """ - @spec execute(Cachex.t(), function, Keyword.t()) :: {status, any} + @spec execute(Cachex.t(), function(), Keyword.t()) :: any() def execute(cache, operation, options \\ []) when is_function(operation, 1) and is_list(options) do Overseer.with(cache, fn cache -> - {:ok, operation.(cache)} + operation.(cache) end) end @@ -482,13 +461,13 @@ defmodule Cachex do iex> Cachex.put(:my_cache, "key", "value") iex> Cachex.exists?(:my_cache, "key") - { :ok, true } + true iex> Cachex.exists?(:my_cache, "missing_key") - { :ok, false } + false """ - @spec exists?(Cachex.t(), any, Keyword.t()) :: {status, boolean} + @spec exists?(Cachex.t(), any(), Keyword.t()) :: boolean() def exists?(cache, key, options \\ []) when is_list(options), do: Router.route(cache, {:exists?, [key, options]}) @@ -505,13 +484,13 @@ defmodule Cachex do iex> Cachex.put(:my_cache, "key", "value") iex> Cachex.expire(:my_cache, "key", :timer.seconds(5)) - { :ok, true } + true iex> Cachex.expire(:my_cache, "missing_key", :timer.seconds(5)) - { :ok, false } + false """ - @spec expire(Cachex.t(), any, number | nil, Keyword.t()) :: {status, boolean} + @spec expire(Cachex.t(), any(), number() | nil, Keyword.t()) :: boolean() def expire(cache, key, expiration, options \\ []) when (is_nil(expiration) or is_number(expiration)) and is_list(options), do: Router.route(cache, {:expire, [key, expiration, options]}) @@ -527,13 +506,13 @@ defmodule Cachex do iex> Cachex.put(:my_cache, "key", "value") iex> Cachex.expire_at(:my_cache, "key", 1455728085502) - { :ok, true } + true iex> Cachex.expire_at(:my_cache, "missing_key", 1455728085502) - { :ok, false } + false """ - @spec expire_at(Cachex.t(), any, number, Keyword.t()) :: {status, boolean} + @spec expire_at(Cachex.t(), any(), number(), Keyword.t()) :: boolean() def expire_at(cache, key, timestamp, options \\ []) when is_number(timestamp) and is_list(options) do via_opts = via({:expire_at, [key, timestamp, options]}, options) @@ -554,10 +533,10 @@ defmodule Cachex do iex> Cachex.put(:my_cache, "key", "value") iex> Cachex.export(:my_cache) - { :ok, [ { :entry, "key", 1538714590095, nil, "value" } ] } + [ { :entry, "key", 1538714590095, nil, "value" } ] """ - @spec export(Cachex.t(), Keyword.t()) :: {status, [Cachex.Spec.entry()]} + @spec 
export(Cachex.t(), Keyword.t()) :: [Cachex.Spec.entry()] | Cachex.error() def export(cache, options \\ []) when is_list(options), do: Router.route(cache, {:export, [options]}) @@ -595,7 +574,7 @@ defmodule Cachex do iex> Cachex.fetch(:my_cache, "key", fn(key) -> ...> { :commit, String.reverse(key) } ...> end) - { :ok, "value" } + "value" iex> Cachex.fetch(:my_cache, "missing_key", fn(key) -> ...> { :ignore, String.reverse(key) } @@ -613,8 +592,8 @@ defmodule Cachex do { :commit, "seripxe_yek_gnissim" } """ - @spec fetch(Cachex.t(), any, function(), Keyword.t()) :: - {status | :commit | :ignore, any} + @spec fetch(Cachex.t(), any(), function(), Keyword.t()) :: + any() | {:commit, any()} | {:ignore, any()} | Cachex.error() def fetch(cache, key, fallback, options \\ []) when is_function(fallback) and is_list(options), do: Router.route(cache, {:fetch, [key, fallback, options]}) @@ -626,15 +605,18 @@ defmodule Cachex do iex> Cachex.put(:my_cache, "key", "value") iex> Cachex.get(:my_cache, "key") - { :ok, "value" } + "value" iex> Cachex.get(:my_cache, "missing_key") - { :ok, nil } + nil + + iex> Cachex.get(:my_cache, "missing_key", "default") + "default" """ - @spec get(Cachex.t(), any, Keyword.t()) :: {atom, any} - def get(cache, key, options \\ []) when is_list(options), - do: Router.route(cache, {:get, [key, options]}) + @spec get(Cachex.t(), any(), any(), Keyword.t()) :: any() | nil + def get(cache, key, default \\ nil, options \\ []) when is_list(options), + do: Router.route(cache, {:get, [key, default, options]}) @doc """ Retrieves and updates an entry in a cache. @@ -661,12 +643,15 @@ defmodule Cachex do ...> end) { :ignore, nil } + iex> Cachex.get_and_update(:my_cache, "missing_key", &([1|&1]), []) + { :commit, [1] } + """ - @spec get_and_update(Cachex.t(), any, function, Keyword.t()) :: - {:commit | :ignore, any} - def get_and_update(cache, key, updater, options \\ []) + @spec get_and_update(Cachex.t(), any(), function(), any(), Keyword.t()) :: + {:commit | :ignore, any()} + def get_and_update(cache, key, updater, default \\ nil, options \\ []) when is_function(updater, 1) and is_list(options), - do: Router.route(cache, {:get_and_update, [key, updater, options]}) + do: Router.route(cache, {:get_and_update, [key, updater, default, options]}) @doc """ Retrieves a list of all entry keys from a cache. 
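The new `default` arguments threaded through `get/4` and `get_and_update/5` above are easiest to see in use. A minimal sketch, assuming a started `:my_cache` and the signatures shown in this diff:

```elixir
# values now come back unwrapped, with an optional default on a miss
:ok = Cachex.put(:my_cache, "key", "value")
"value" = Cachex.get(:my_cache, "key")
nil = Cachex.get(:my_cache, "missing")
"fallback" = Cachex.get(:my_cache, "missing", "fallback")

# get_and_update/5 seeds the updater with the default when the key is absent
{:commit, [1]} = Cachex.get_and_update(:my_cache, "missing_list", &[1 | &1], [])
```
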
@@ -679,14 +664,14 @@ defmodule Cachex do iex> Cachex.put(:my_cache, "key2", "value2") iex> Cachex.put(:my_cache, "key3", "value3") iex> Cachex.keys(:my_cache) - { :ok, [ "key2", "key1", "key3" ] } + [ "key2", "key1", "key3" ] iex> Cachex.clear(:my_cache) iex> Cachex.keys(:my_cache) - { :ok, [] } + [] """ - @spec keys(Cachex.t(), Keyword.t()) :: {status, [any]} + @spec keys(Cachex.t(), Keyword.t()) :: [any()] def keys(cache, options \\ []) when is_list(options), do: Router.route(cache, {:keys, [options]}) @@ -700,10 +685,10 @@ defmodule Cachex do iex> Cachex.put(:my_cache, "key", "value") iex> Cachex.import(:my_cache, [ { :entry, "key", "value", 1538714590095, nil } ]) - { :ok, 1 } + 1 """ - @spec import(Cachex.t(), Enumerable.t(), Keyword.t()) :: {status, integer} + @spec import(Cachex.t(), Enumerable.t(), Keyword.t()) :: integer() | Cachex.error() def import(cache, entries, options \\ []) when is_list(options), do: Router.route(cache, {:import, [entries, options]}) @@ -723,20 +708,19 @@ defmodule Cachex do iex> Cachex.put(:my_cache, "my_key", 10) iex> Cachex.incr(:my_cache, "my_key") - { :ok, 11 } + 11 iex> Cachex.put(:my_cache, "my_new_key", 10) iex> Cachex.incr(:my_cache, "my_new_key", 5) - { :ok, 15 } + 15 iex> Cachex.incr(:my_cache, "missing_key", 5, default: 2) - { :ok, 7 } + 7 """ - @spec incr(Cachex.t(), any, integer, Keyword.t()) :: {status, integer} - def incr(cache, key, amount \\ 1, options \\ []) - when is_integer(amount) and is_list(options), - do: Router.route(cache, {:incr, [key, amount, options]}) + @spec incr(Cachex.t(), any(), integer(), Keyword.t()) :: integer() | Cachex.error() + def incr(cache, key, amount \\ 1, options \\ []) when is_integer(amount) and is_list(options), + do: Router.route(cache, {:incr, [key, amount, options]}) @doc """ Inspects various aspects of a cache. 
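The counter changes above follow the same unwrapping pattern; a small sketch of the `incr`/`decr` returns, with values assuming a fresh cache:

```elixir
# :default seeds the counter before the delta is applied
7 = Cachex.incr(:my_cache, "hits", 5, default: 2)
12 = Cachex.incr(:my_cache, "hits", 5)
7 = Cachex.decr(:my_cache, "hits", 5)
```
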
@@ -794,35 +778,33 @@ defmodule Cachex do
 ## Examples
 
 iex> Cachex.inspect(:my_cache, :cache)
- {:ok,
-  {:cache, :my_cache, %{}, false, {:expiration, nil, 3000, true},
-   {:hooks, [], [{:hook, Cachex.Stats, nil, #PID<0.986.0>}]},
-   [{:hook, Cachex.Limit.Scheduled, {500, [], []}, #PID<0.985.0>}], nil, false,
-   {:router, [], Cachex.Router.Local, nil}, false, []}}
+ {:cache, :my_cache, %{}, false, {:expiration, nil, 3000, true},
+  {:hooks, [], [], []}, nil, false, {:router, [], Cachex.Router.Local, nil},
+  false, []}
 
 iex> Cachex.inspect(:my_cache, { :entry, "my_key" } )
- { :ok, { :entry, "my_key", 1475476615662, 1, "my_value" } }
+ { :entry, "my_key", 1475476615662, 1, "my_value" }
 
 iex> Cachex.inspect(:my_cache, { :expired, :count })
- { :ok, 0 }
+ 0
 
 iex> Cachex.inspect(:my_cache, { :expired, :keys })
- { :ok, [ ] }
+ [ ]
 
 iex> Cachex.inspect(:my_cache, { :janitor, :last })
- { :ok, %{ count: 0, duration: 57, started: 1475476530925 } }
+ %{ count: 0, duration: 57, started: 1475476530925 }
 
 iex> Cachex.inspect(:my_cache, { :memory, :binary })
- { :ok, "10.38 KiB" }
+ "10.38 KiB"
 
 iex> Cachex.inspect(:my_cache, { :memory, :bytes })
- { :ok, 10624 }
+ 10624
 
 iex> Cachex.inspect(:my_cache, { :memory, :words })
- { :ok, 1328 }
+ 1328
 
 """
- @spec inspect(Cachex.t(), atom | tuple, Keyword.t()) :: {status, any}
+ @spec inspect(Cachex.t(), atom() | tuple(), Keyword.t()) :: any() | Cachex.error()
 def inspect(cache, option, options \\ []) when is_list(options),
 do: Router.route(cache, {:inspect, [option, options]})
@@ -845,12 +827,15 @@ defmodule Cachex do
 iex> Cachex.put(:my_cache, "my_list", [ 1, 2, 3 ])
 iex> Cachex.invoke(:my_cache, :last, "my_list")
- { :ok, 3 }
+ 3
+
+ iex> Cachex.invoke(:my_cache, :last, "missing", [1])
+ 1
 
 """
- @spec invoke(Cachex.t(), atom, any, Keyword.t()) :: any
- def invoke(cache, cmd, key, options \\ []) when is_list(options),
- do: Router.route(cache, {:invoke, [cmd, key, options]})
+ @spec invoke(Cachex.t(), atom(), any(), any(), Keyword.t()) :: any()
+ def invoke(cache, cmd, key, default \\ nil, options \\ []) when is_list(options),
+ do: Router.route(cache, {:invoke, [cmd, key, default, options]})
 
 @doc """
 Removes an expiration time from an entry in a cache.
 
 ## Examples
 
 iex> Cachex.put(:my_cache, "key", "value", expiration: 1000)
 iex> Cachex.persist(:my_cache, "key")
- { :ok, true }
+ true
 
 iex> Cachex.persist(:my_cache, "missing_key")
- { :ok, false }
+ false
 
 """
- @spec persist(Cachex.t(), any, Keyword.t()) :: {status, boolean}
+ @spec persist(Cachex.t(), any(), Keyword.t()) :: boolean()
 def persist(cache, key, options \\ []) when is_list(options),
 do: expire(cache, key, nil, via({:persist, [key, options]}, options))
@@ -875,6 +860,9 @@ defmodule Cachex do
 Pruning is done via a Least Recently Written (LRW) approach, determined by the
 modification time inside each cache record to avoid storing additional state.
 
+ The return value of this function represents the number of entries removed in
+ order to trim the cache to the required bounds.
+
 For full details on this feature, please see the section of the documentation
 related to limitation of caches.
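The `Cachex.invoke/5` change above ties back to the custom commands guide at the top of this diff; a hedged sketch of registering and invoking a `:read` command, where the `command/1` record constructor comes from `Cachex.Spec` and the new `default` is handed to the command when the key is missing:

```elixir
import Cachex.Spec

# register a :read command, as in the custom commands guide
Cachex.start_link(:my_cache,
  commands: [last: command(type: :read, execute: &List.last/1)]
)

:ok = Cachex.put(:my_cache, "my_list", [1, 2, 3])

3 = Cachex.invoke(:my_cache, :last, "my_list")
1 = Cachex.invoke(:my_cache, :last, "missing", [1])
```
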
@@ -895,25 +883,24 @@ defmodule Cachex do
 ## Examples
 
 iex> Cachex.put(:my_cache, "key1", "value1")
- { :ok, true }
+ :ok
 
 iex> :timer.sleep(1)
 :ok
 
 iex> Cachex.put(:my_cache, "key2", "value2")
- { :ok, true }
+ :ok
 
 iex> Cachex.prune(:my_cache, 1, reclaim: 0)
- { :ok, true }
+ 1
 
 iex> Cachex.keys(:my_cache)
- { :ok, [ "key2"] }
+ [ "key2" ]
 
 """
- @spec prune(Cachex.t(), integer, Keyword.t()) :: {status, boolean}
- def prune(cache, size, options \\ [])
-  when is_positive_integer(size) and is_list(options),
-  do: Router.route(cache, {:prune, [size, options]})
+ @spec prune(Cachex.t(), integer(), Keyword.t()) :: integer()
+ def prune(cache, size, options \\ []) when is_positive_integer(size) and is_list(options),
+ do: Router.route(cache, {:prune, [size, options]})
 
 @doc """
 Triggers a cleanup of all expired entries in a cache.
@@ -926,10 +913,10 @@ defmodule Cachex do
 ## Examples
 
 iex> Cachex.purge(:my_cache)
- { :ok, 15 }
+ 15
 
 """
- @spec purge(Cachex.t(), Keyword.t()) :: {status, number}
+ @spec purge(Cachex.t(), Keyword.t()) :: integer()
 def purge(cache, options \\ []) when is_list(options),
 do: Router.route(cache, {:purge, [options]})
@@ -949,14 +936,14 @@ defmodule Cachex do
 ## Examples
 
 iex> Cachex.put(:my_cache, "key", "value")
- { :ok, true }
+ :ok
 
 iex> Cachex.put(:my_cache, "key", "value", expire: :timer.seconds(5))
 iex> Cachex.ttl(:my_cache, "key")
- { :ok, 5000 }
+ 5000
 
 """
- @spec put(Cachex.t(), any, any, Keyword.t()) :: {status, any}
+ @spec put(Cachex.t(), any(), any(), Keyword.t()) :: :ok
 def put(cache, key, value, options \\ []) when is_list(options),
 do: Router.route(cache, {:put, [key, value, options]})
@@ -978,17 +965,16 @@ defmodule Cachex do
 ## Examples
 
 iex> Cachex.put_many(:my_cache, [ { "key", "value" } ])
- { :ok, true }
+ :ok
 
 iex> Cachex.put_many(:my_cache, [ { "key", "value" } ], expire: :timer.seconds(5))
 iex> Cachex.ttl(:my_cache, "key")
- { :ok, 5000 }
+ 5000
 
 """
- @spec put_many(Cachex.t(), [{any, any}], Keyword.t()) :: {status, any}
- def put_many(cache, pairs, options \\ [])
-  when is_list(pairs) and is_list(options),
-  do: Router.route(cache, {:put_many, [pairs, options]})
+ @spec put_many(Cachex.t(), [{any(), any()}], Keyword.t()) :: :ok | Cachex.error()
+ def put_many(cache, pairs, options \\ []) when is_list(pairs) and is_list(options),
+ do: Router.route(cache, {:put_many, [pairs, options]})
 
 @doc """
 Refreshes an expiration for an entry in a cache.
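The `prune/3` examples above pair naturally with the docs changes earlier in this diff; a short sketch of the return-value semantics, assuming the documented default `:reclaim` of `0.1`:

```elixir
for i <- 1..100, do: Cachex.put(:my_cache, i, i)

# the integer returned is the number of evicted entries, including
# the extra 10% reclaimed to reduce thrashing (50 * 0.1 = 5)
55 = Cachex.prune(:my_cache, 50)
45 = Cachex.size(:my_cache)
```
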
@@ -1003,17 +989,17 @@ defmodule Cachex do
 iex> Cachex.put(:my_cache, "my_key", "my_value", expire: :timer.seconds(5))
 iex> Process.sleep(4)
 iex> Cachex.ttl(:my_cache, "my_key")
- { :ok, 1000 }
+ 1000
 
 iex> Cachex.refresh(:my_cache, "my_key")
 iex> Cachex.ttl(:my_cache, "my_key")
- { :ok, 5000 }
+ 5000
 
 iex> Cachex.refresh(:my_cache, "missing_key")
- { :ok, false }
+ false
 
 """
- @spec refresh(Cachex.t(), any, Keyword.t()) :: {status, boolean}
+ @spec refresh(Cachex.t(), any(), Keyword.t()) :: boolean()
 def refresh(cache, key, options \\ []) when is_list(options),
 do: Router.route(cache, {:refresh, [key, options]})
@@ -1040,19 +1026,19 @@ defmodule Cachex do
 iex> Cachex.put(:my_cache, "my_key", "my_value")
 iex> Cachex.reset(:my_cache)
 iex> Cachex.size(:my_cache)
- { :ok, 0 }
+ 0
 
 iex> Cachex.reset(:my_cache, [ only: :hooks ])
- { :ok, true }
+ :ok
 
 iex> Cachex.reset(:my_cache, [ only: :hooks, hooks: [ MyHook ] ])
- { :ok, true }
+ :ok
 
 iex> Cachex.reset(:my_cache, [ only: :cache ])
- { :ok, true }
+ :ok
 
 """
- @spec reset(Cachex.t(), Keyword.t()) :: {status, true}
+ @spec reset(Cachex.t(), Keyword.t()) :: :ok
 def reset(cache, options \\ []) when is_list(options),
 do: Router.route(cache, {:reset, [options]})
@@ -1078,26 +1064,25 @@ defmodule Cachex do
 iex> Cachex.put(:my_cache, "my_key", 10)
 iex> Cachex.save(:my_cache, "/tmp/my_backup")
- { :ok, true }
+ :ok
 
 iex> Cachex.size(:my_cache)
- { :ok, 1 }
+ 1
 
 iex> Cachex.clear(:my_cache)
 iex> Cachex.size(:my_cache)
- { :ok, 0 }
+ 0
 
 iex> Cachex.restore(:my_cache, "/tmp/my_backup")
- { :ok, 1 }
+ 1
 
 iex> Cachex.size(:my_cache)
- { :ok, 1 }
+ 1
 
 """
- @spec restore(Cachex.t(), binary, Keyword.t()) :: {status, integer}
- def restore(cache, path, options \\ [])
-  when is_binary(path) and is_list(options),
-  do: Router.route(cache, {:restore, [path, options]})
+ @spec restore(Cachex.t(), binary(), Keyword.t()) :: integer() | Cachex.error()
+ def restore(cache, path, options \\ []) when is_binary(path) and is_list(options),
+ do: Router.route(cache, {:restore, [path, options]})
 
 @doc """
 Serializes a cache to a location on a filesystem.
@@ -1119,13 +1104,12 @@ defmodule Cachex do
 ## Examples
 
 iex> Cachex.save(:my_cache, "/tmp/my_default_backup")
- { :ok, true }
+ :ok
 
 """
- @spec save(Cachex.t(), binary, Keyword.t()) :: {status, any}
- def save(cache, path, options \\ [])
-  when is_binary(path) and is_list(options),
-  do: Router.route(cache, {:save, [path, options]})
+ @spec save(Cachex.t(), binary(), Keyword.t()) :: :ok | Cachex.error()
+ def save(cache, path, options \\ []) when is_binary(path) and is_list(options),
+ do: Router.route(cache, {:save, [path, options]})
 
 @doc """
 Retrieves the total size of a cache.
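A quick round trip of the `save`/`restore` pair above; counts assume the single-entry cache from the doctest, and the path is a throwaway location:

```elixir
:ok = Cachex.put(:my_cache, "my_key", 10)
:ok = Cachex.save(:my_cache, "/tmp/my_backup")

# wipe the cache, then pull the serialized entries back off disk
1 = Cachex.clear(:my_cache)
0 = Cachex.size(:my_cache)
1 = Cachex.restore(:my_cache, "/tmp/my_backup")
```
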
@@ -1148,13 +1132,13 @@ defmodule Cachex do
 iex> Cachex.put(:my_cache, "key2", "value2")
 iex> Cachex.put(:my_cache, "key3", "value3", expire: 1)
 iex> Cachex.size(:my_cache)
- { :ok, 3 }
+ 3
 
 iex> Cachex.size(:my_cache, expired: false)
- { :ok, 2 }
+ 2
 
 """
- @spec size(Cachex.t(), Keyword.t()) :: {status, number}
+ @spec size(Cachex.t(), Keyword.t()) :: integer()
 def size(cache, options \\ []) when is_list(options),
 do: Router.route(cache, {:size, [options]})
@@ -1167,13 +1151,13 @@ defmodule Cachex do
 ## Examples
 
 iex> Cachex.stats(:my_cache)
- {:ok, %{meta: %{creation_date: 1518984857331}}}
+ %{meta: %{creation_date: 1518984857331}}
 
 iex> Cachex.stats(:cache_with_no_stats)
 { :error, :stats_disabled }
 
 """
- @spec stats(Cachex.t(), Keyword.t()) :: {status, map()}
+ @spec stats(Cachex.t(), Keyword.t()) :: map() | Cachex.error()
 def stats(cache, options \\ []) when is_list(options),
 do: Router.route(cache, {:stats, [options]})
@@ -1199,30 +1183,29 @@ defmodule Cachex do
 iex> Cachex.put(:my_cache, "a", 1)
 iex> Cachex.put(:my_cache, "b", 2)
 iex> Cachex.put(:my_cache, "c", 3)
- {:ok, true}
+ :ok
 
- iex> :my_cache |> Cachex.stream! |> Enum.to_list
+ iex> :my_cache |> Cachex.stream |> Enum.to_list
 [{:entry, "b", 1519015801794, nil, 2},
  {:entry, "c", 1519015805679, nil, 3},
  {:entry, "a", 1519015794445, nil, 1}]
 
 iex> query = Cachex.Query.build(output: :key)
- iex> :my_cache |> Cachex.stream!(query) |> Enum.to_list
+ iex> :my_cache |> Cachex.stream(query) |> Enum.to_list
 ["b", "c", "a"]
 
 iex> query = Cachex.Query.build(output: :value)
- iex> :my_cache |> Cachex.stream!(query) |> Enum.to_list
+ iex> :my_cache |> Cachex.stream(query) |> Enum.to_list
 [2, 3, 1]
 
 iex> query = Cachex.Query.build(output: {:key, :value})
- iex> :my_cache |> Cachex.stream!(query) |> Enum.to_list
+ iex> :my_cache |> Cachex.stream(query) |> Enum.to_list
 [{"b", 2}, {"c", 3}, {"a", 1}]
 
 """
- @spec stream(Cachex.t(), any, Keyword.t()) :: {status, Enumerable.t()}
- def stream(cache, query \\ Q.build(where: Q.unexpired()), options \\ [])
-  when is_list(options),
-  do: Router.route(cache, {:stream, [query, options]})
+ @spec stream(Cachex.t(), any(), Keyword.t()) :: Enumerable.t() | Cachex.error()
+ def stream(cache, query \\ Q.build(where: Q.unexpired()), options \\ []) when is_list(options),
+ do: Router.route(cache, {:stream, [query, options]})
 
 @doc """
 Takes an entry from a cache.
@@ -1234,16 +1217,16 @@ defmodule Cachex do
 iex> Cachex.put(:my_cache, "key", "value")
 iex> Cachex.take(:my_cache, "key")
- { :ok, "value" }
+ "value"
 
 iex> Cachex.get(:my_cache, "key")
- { :ok, nil }
+ nil
 
 iex> Cachex.take(:my_cache, "missing_key")
- { :ok, nil }
+ nil
 
 """
- @spec take(Cachex.t(), any, Keyword.t()) :: {status, any}
+ @spec take(Cachex.t(), any(), Keyword.t()) :: any()
 def take(cache, key, options \\ []) when is_list(options),
 do: Router.route(cache, {:take, [key, options]})
@@ -1252,8 +1235,23 @@ defmodule Cachex do
 This is very similar to `refresh/3` except that the expiration time is
 maintained inside the record (using a calculated offset).
+ + ## Examples + + iex> Cachex.put(:my_cache, "my_key", "my_value", expire: :timer.seconds(5)) + iex> Process.sleep(4) + iex> Cachex.ttl(:my_cache, "my_key") + 1000 + + iex> Cachex.touch(:my_cache, "my_key") + iex> Cachex.ttl(:my_cache, "my_key") + 1000 + + iex> Cachex.touch(:my_cache, "missing_key") + false + """ - @spec touch(Cachex.t(), any, Keyword.t()) :: {status, boolean} + @spec touch(Cachex.t(), any(), Keyword.t()) :: boolean() def touch(cache, key, options \\ []) when is_list(options), do: Router.route(cache, {:touch, [key, options]}) @@ -1276,14 +1274,14 @@ defmodule Cachex do ...> val2 = Cachex.get(worker, "key2") ...> [val1, val2] ...> end) - { :ok, [ "value1", "value2" ] } + [ "value1", "value2" ] """ - @spec transaction(Cachex.t(), [any], function, Keyword.t()) :: {status, any} + @spec transaction(Cachex.t(), [any()], function(), Keyword.t()) :: any() def transaction(cache, keys, operation, options \\ []) when is_function(operation) and is_list(keys) and is_list(options) do Overseer.with(cache, fn cache -> - trans_cache = + enabled = case cache(cache, :transactions) do true -> cache @@ -1294,7 +1292,7 @@ defmodule Cachex do |> Overseer.update(&cache(&1, transactions: true)) end - Router.route(trans_cache, {:transaction, [keys, operation, options]}) + Router.route(enabled, {:transaction, [keys, operation, options]}) end) end @@ -1308,16 +1306,16 @@ defmodule Cachex do ## Examples iex> Cachex.ttl(:my_cache, "my_key") - { :ok, 13985 } + 13985 iex> Cachex.ttl(:my_cache, "my_key_with_no_ttl") - { :ok, nil } + nil iex> Cachex.ttl(:my_cache, "missing_key") - { :ok, nil } + nil """ - @spec ttl(Cachex.t(), any, Keyword.t()) :: {status, integer | nil} + @spec ttl(Cachex.t(), any(), Keyword.t()) :: integer() | nil def ttl(cache, key, options \\ []) when is_list(options), do: Router.route(cache, {:ttl, [key, options]}) @@ -1333,17 +1331,17 @@ defmodule Cachex do iex> Cachex.put(:my_cache, "key", "value") iex> Cachex.get(:my_cache, "key") - { :ok, "value" } + "value" iex> Cachex.update(:my_cache, "key", "new_value") iex> Cachex.get(:my_cache, "key") - { :ok, "new_value" } + "new_value" iex> Cachex.update(:my_cache, "missing_key", "new_value") - { :ok, false } + false """ - @spec update(Cachex.t(), any, any, Keyword.t()) :: {status, any} + @spec update(Cachex.t(), any(), any(), Keyword.t()) :: boolean() def update(cache, key, value, options \\ []) when is_list(options), do: Router.route(cache, {:update, [key, value, options]}) @@ -1372,19 +1370,19 @@ defmodule Cachex do ## Examples iex> Cachex.warm(:my_cache) - { :ok, [MyWarmer] } + [MyWarmer] iex> Cachex.warm(:my_cache, only: [MyWarmer]) - { :ok, [MyWarmer] } + [MyWarmer] iex> Cachex.warm(:my_cache, only: []) - { :ok, [] } + [] iex> Cachex.warm(:my_cache, wait: true) - { :ok, [MyWarmer]} + [MyWarmer] """ - @spec warm(Cachex.t(), Keyword.t()) :: {status, [atom()]} + @spec warm(Cachex.t(), Keyword.t()) :: [atom()] def warm(cache, options \\ []), do: Router.route(cache, {:warm, [options]}) @@ -1437,10 +1435,10 @@ defmodule Cachex do required = [only: Enum.map(req, &warmer(&1, :name)), wait: true] optional = [only: Enum.map(opt, &warmer(&1, :name)), wait: false] - with {:ok, _} <- Cachex.warm(cache, const(:notify_false) ++ required), - {:ok, _} <- Cachex.warm(cache, const(:notify_false) ++ optional) do - :ok - end + Cachex.warm(cache, const(:notify_false) ++ required) + Cachex.warm(cache, const(:notify_false) ++ optional) + + :ok end # Unwraps a command result into an unsafe form. 
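For context on `unwrap_unsafe/1` below: the `@unsafe` list near the top of this diff generates bang variants which raise rather than returning error tuples. A hedged sketch of the difference, reusing the `stats` example from the docs above and assuming errors surface as a `Cachex.Error`:

```elixir
# the plain call surfaces errors as tagged tuples...
{:error, :stats_disabled} = Cachex.stats(:cache_with_no_stats)

# ...while the generated bang variant raises instead
try do
  Cachex.stats!(:cache_with_no_stats)
rescue
  error in Cachex.Error -> IO.puts(Exception.message(error))
end
```
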
@@ -1455,9 +1453,12 @@ defmodule Cachex do
 defp unwrap_unsafe({:error, value}) when is_binary(value),
 do: raise(Cachex.Error, message: value)
 
- defp unwrap_unsafe({:error, %Cachex.Error{stack: stack} = e}),
- do: reraise(e, stack)
+ defp unwrap_unsafe({:error, %Cachex.Error{stack: stack} = error}),
+ do: reraise(error, stack)
 
 defp unwrap_unsafe({_state, value}),
 do: value
+
+ defp unwrap_unsafe(value),
+ do: value
 end
diff --git a/lib/cachex/actions.ex b/lib/cachex/actions.ex
index 2e34674c..18646bd4 100644
--- a/lib/cachex/actions.ex
+++ b/lib/cachex/actions.ex
@@ -88,16 +88,16 @@ defmodule Cachex.Actions do
 Note that updates are atomic; either all updates will take place, or none will.
 """
- @spec update(Cachex.t(), any, [tuple]) :: {:ok, boolean}
+ @spec update(Cachex.t(), any(), [tuple()]) :: boolean()
 def update(cache(name: name), key, changes),
- do: {:ok, :ets.update_element(name, key, changes)}
+ do: :ets.update_element(name, key, changes)
 
 @doc """
 Writes a new entry into a cache.
 """
- @spec write(Cachex.t(), Cachex.Spec.entries()) :: {:ok, boolean}
+ @spec write(Cachex.t(), Cachex.Spec.entries()) :: :ok
 def write(cache(name: name), entries),
- do: {:ok, :ets.insert(name, entries)}
+ do: :ets.insert(name, entries) && :ok
 
 @doc """
 Returns the operation used for a write based on a prior value.
diff --git a/lib/cachex/actions/clear.ex b/lib/cachex/actions/clear.ex
index 84740786..63462e86 100644
--- a/lib/cachex/actions/clear.ex
+++ b/lib/cachex/actions/clear.ex
@@ -25,26 +25,11 @@ defmodule Cachex.Actions.Clear do
 """
 def execute(cache(name: name) = cache, _options) do
 Locksmith.transaction(cache, [], fn ->
- evicted =
-  cache
-  |> Size.execute([])
-  |> handle_evicted
+ evicted = Size.execute(cache, [])
 
 true = :ets.delete_all_objects(name)
 
 evicted
 end)
 end
-
- ###############
- # Private API #
- ###############
-
- # Handles the result of a size() call.
- #
- # This just verifies that we can safely return a size. Being realistic,
- # this will almost always hit the top case - the latter is just provided
- # in order to avoid crashing if something goes totally wrong.
- defp handle_evicted({:ok, _size} = res),
- do: res
 end
diff --git a/lib/cachex/actions/del.ex b/lib/cachex/actions/del.ex
index 1e72a0cd..706f732e 100644
--- a/lib/cachex/actions/del.ex
+++ b/lib/cachex/actions/del.ex
@@ -13,7 +13,7 @@ defmodule Cachex.Actions.Del do
 @doc """
 Removes an entry from a cache by key.
 
- This command will always return a true value, signalling that the key no longer
+ This command will always return an :ok value, signalling that the key no longer
 exists in the cache (regardless of whether it previously existed).
 
 Removal runs in a lock aware context, to ensure that we're not removing a key
@@ -21,7 +21,8 @@ defmodule Cachex.Actions.Del do
 def execute(cache(name: name) = cache, key, _options) do
 Locksmith.write(cache, [key], fn ->
- {:ok, :ets.delete(name, key)}
+ :ets.delete(name, key)
+ :ok
 end)
 end
 end
diff --git a/lib/cachex/actions/empty.ex b/lib/cachex/actions/empty.ex
index 2b182e33..1ca6e0f4 100644
--- a/lib/cachex/actions/empty.ex
+++ b/lib/cachex/actions/empty.ex
@@ -24,8 +24,6 @@ defmodule Cachex.Actions.Empty do
 Internally this action is delegated through to the `size()` command
 and the returned numeric value is just "cast" to a boolean value.
""" - def execute(cache() = cache, _options) do - {:ok, size} = Size.execute(cache, []) - {:ok, size == 0} - end + def execute(cache() = cache, _options), + do: Size.execute(cache, []) == 0 end diff --git a/lib/cachex/actions/exists.ex b/lib/cachex/actions/exists.ex index 5549b766..389f27d9 100644 --- a/lib/cachex/actions/exists.ex +++ b/lib/cachex/actions/exists.ex @@ -21,5 +21,5 @@ defmodule Cachex.Actions.Exists do `Cachex.Actions` module and just cast the result to a boolean. """ def execute(cache() = cache, key, _options), - do: {:ok, !!Actions.read(cache, key)} + do: !!Actions.read(cache, key) end diff --git a/lib/cachex/actions/expire.ex b/lib/cachex/actions/expire.ex index 67463001..1dcb8394 100644 --- a/lib/cachex/actions/expire.ex +++ b/lib/cachex/actions/expire.ex @@ -18,27 +18,12 @@ defmodule Cachex.Actions.Expire do @doc """ Sets the expiration time on a given cache entry. - If a negative expiration time is provided, the entry is immediately removed - from the cache (as it means we have already expired). If a positive expiration - time is provided, we update the touch time on the entry and update the expiration - to the one provided. - - If the expiration provided is nil, we need to remove the expiration; so we update - in the exact same way. This is done passively due to the fact that Erlang term order - determines that `nil > -1 == true`. - This command executes inside a lock aware context to ensure that the key isn't currently being used/modified/removed from another process in the application. """ def execute(cache() = cache, key, expiration, _options) do Locksmith.write(cache, [key], fn -> - case expiration > -1 do - true -> - Actions.update(cache, key, entry_mod_now(expiration: expiration)) - - false -> - Cachex.del(cache, key, const(:purge_override)) - end + Actions.update(cache, key, entry_mod_now(expiration: max(0, expiration))) end) end end diff --git a/lib/cachex/actions/export.ex b/lib/cachex/actions/export.ex index 075640fa..3cce34c4 100644 --- a/lib/cachex/actions/export.ex +++ b/lib/cachex/actions/export.ex @@ -24,8 +24,8 @@ defmodule Cachex.Actions.Export do query = Query.build() options = const(:local) ++ const(:notify_false) - with {:ok, stream} <- Cachex.stream(cache, query, options) do - {:ok, Enum.to_list(stream)} - end + cache + |> Cachex.stream(query, options) + |> Enum.to_list() end end diff --git a/lib/cachex/actions/fetch.ex b/lib/cachex/actions/fetch.ex index 72d27f79..eb1ef7dd 100644 --- a/lib/cachex/actions/fetch.ex +++ b/lib/cachex/actions/fetch.ex @@ -30,7 +30,7 @@ defmodule Cachex.Actions.Fetch do placed in the cache in order to allow read-through caches. """ def execute(cache() = cache, key, fallback, _options) do - with {:ok, nil} <- Get.execute(cache, key, []) do + with :"$fetch" <- Get.execute(cache, key, :"$fetch", []) do Courier.dispatch(cache, key, generate_task(fallback, key)) end end diff --git a/lib/cachex/actions/get.ex b/lib/cachex/actions/get.ex index 0a3425b1..282cd594 100644 --- a/lib/cachex/actions/get.ex +++ b/lib/cachex/actions/get.ex @@ -18,13 +18,13 @@ defmodule Cachex.Actions.Get do @doc """ Retrieves a value from inside the cache. 
""" - def execute(cache() = cache, key, _options) do + def execute(cache() = cache, key, default, _options) do case Actions.read(cache, key) do entry(value: value) -> - {:ok, value} + value nil -> - {:ok, nil} + default end end end diff --git a/lib/cachex/actions/get_and_update.ex b/lib/cachex/actions/get_and_update.ex index 306a363c..08517af1 100644 --- a/lib/cachex/actions/get_and_update.ex +++ b/lib/cachex/actions/get_and_update.ex @@ -27,13 +27,13 @@ defmodule Cachex.Actions.GetAndUpdate do value in the cache directly. If it does exist, then we use the update actions to update the existing record. """ - def execute(cache() = cache, key, update_fun, _options) do + def execute(cache() = cache, key, updater, default, _options) do Locksmith.transaction(cache, [key], fn -> - {_label, value} = Cachex.get(cache, key, []) + value = Cachex.get(cache, key, default) formatted = value - |> update_fun.() + |> updater.() |> Actions.format_fetch_value() operation = Actions.write_op(value) diff --git a/lib/cachex/actions/import.ex b/lib/cachex/actions/import.ex index 7b21dea1..6fb1b4cf 100644 --- a/lib/cachex/actions/import.ex +++ b/lib/cachex/actions/import.ex @@ -20,7 +20,7 @@ defmodule Cachex.Actions.Import do a large import set. """ def execute(cache() = cache, entries, _options), - do: {:ok, Enum.reduce(entries, 0, &import(cache, &1, &2, now()))} + do: Enum.reduce(entries, 0, &import(cache, &1, &2, now())) ############### # Private API # @@ -31,7 +31,7 @@ defmodule Cachex.Actions.Import do # As this is a direct import, we just use `Cachex.put/4` with the provided # key and value from the existing entry record - nothing special here. defp import(cache, entry(key: k, expiration: nil, value: v), c, _t) do - Cachex.put!(cache, k, v, const(:notify_false)) + Cachex.put(cache, k, v, const(:notify_false)) c + 1 end @@ -49,7 +49,7 @@ defmodule Cachex.Actions.Import do # import time, so that the rest of the lifetime of the key is the same. If # we didn't do this, the key would live longer in the cache than intended. defp import(cache, entry(key: k, modified: m, expiration: e, value: v), c, t) do - Cachex.put!(cache, k, v, const(:notify_false) ++ [expire: m + e - t]) + Cachex.put(cache, k, v, const(:notify_false) ++ [expire: m + e - t]) c + 1 end end diff --git a/lib/cachex/actions/incr.ex b/lib/cachex/actions/incr.ex index fd3253cc..e66a6637 100644 --- a/lib/cachex/actions/incr.ex +++ b/lib/cachex/actions/incr.ex @@ -39,7 +39,7 @@ defmodule Cachex.Actions.Incr do Locksmith.write(cache, [key], fn -> try do - {:ok, :ets.update_counter(name, key, modify, default)} + :ets.update_counter(name, key, modify, default) rescue _ -> error(:non_numeric_value) end diff --git a/lib/cachex/actions/inspect.ex b/lib/cachex/actions/inspect.ex index 1fde0ad9..3326fa71 100644 --- a/lib/cachex/actions/inspect.ex +++ b/lib/cachex/actions/inspect.ex @@ -63,7 +63,7 @@ defmodule Cachex.Actions.Inspect do # This is relatively easy to get via other methods, but it's available here # as the "best" way for a developer to do so (outside of the internal API). def execute(cache(name: name), :cache, _options), - do: {:ok, Overseer.retrieve(name)} + do: Overseer.lookup(name) # Retrieves a raw entry from the cache table. # @@ -72,8 +72,8 @@ defmodule Cachex.Actions.Inspect do # are not taken into account (either lazily or otherwise) on this read call. 
def execute(cache(name: name), {:entry, key}, _options) do case :ets.lookup(name, key) do - [] -> {:ok, nil} - [e] -> {:ok, e} + [] -> nil + [e] -> e end end @@ -86,7 +86,7 @@ defmodule Cachex.Actions.Inspect do filter = Query.expired() clause = Query.build(where: filter, output: true) - {:ok, :ets.select_count(name, clause)} + :ets.select_count(name, clause) end # Returns the keys of expired entries currently inside the cache. @@ -98,7 +98,7 @@ defmodule Cachex.Actions.Inspect do filter = Query.expired() clause = Query.build(where: filter, output: :key) - {:ok, :ets.select(name, clause)} + :ets.select(name, clause) end # Returns information about the last run of the Janitor service. @@ -115,18 +115,17 @@ defmodule Cachex.Actions.Inspect do # # This should be treated as an estimation as it's rounded based on # the number of words used to maintain the cache. - def execute(cache() = cache, {:memory, :bytes}, options) do - {:ok, mem_words} = execute(cache, {:memory, :words}, options) - {:ok, mem_words * :erlang.system_info(:wordsize)} - end + def execute(cache() = cache, {:memory, :bytes}, options), + do: execute(cache, {:memory, :words}, options) * :erlang.system_info(:wordsize) # Retrieves the current size of the backing cache table in a readable format. # # This should be treated as an estimation as it's rounded based on the number # of words used to maintain the cache. def execute(cache() = cache, {:memory, :binary}, options) do - {:ok, bytes} = execute(cache, {:memory, :bytes}, options) - {:ok, bytes_to_readable(bytes)} + cache + |> execute({:memory, :bytes}, options) + |> bytes_to_readable end # Retrieves the current size of the backing cache table in machine words. @@ -134,7 +133,7 @@ defmodule Cachex.Actions.Inspect do # It's unlikely the caller will want to use this directly, but as it's used # by other inspection methods there's no harm in exposing it in the API. def execute(cache(name: name), {:memory, :words}, _options), - do: {:ok, :ets.info(name, :memory)} + do: :ets.info(name, :memory) # Catch-all to return an error. def execute(_cache, _option, _options), diff --git a/lib/cachex/actions/invoke.ex b/lib/cachex/actions/invoke.ex index 2de9a5ce..163c0281 100644 --- a/lib/cachex/actions/invoke.ex +++ b/lib/cachex/actions/invoke.ex @@ -26,10 +26,10 @@ defmodule Cachex.Actions.Invoke do to a custom command for a given key, and based on the type of command might be written back into the cache table. """ - def execute(cache(commands: commands) = cache, cmd, key, _options) do + def execute(cache(commands: commands) = cache, cmd, key, default, _options) do commands |> Map.get(cmd) - |> invoke(cache, key) + |> invoke(cache, key, default) end ############### @@ -41,9 +41,10 @@ defmodule Cachex.Actions.Invoke do # Values read back will be passed directly to the custom command implementation. # It should be noted that expirations are taken into account, and nil will be # passed through in expired/missing cases. - defp invoke(command(type: :read, execute: exec), cache, key) do - {_status_, value} = Cachex.get(cache, key, []) - {:ok, exec.(value)} + defp invoke(command(type: :read, execute: exec), cache, key, default) do + cache + |> Cachex.get(key, default) + |> exec.() end # Executes a write command on the backing cache table. @@ -52,23 +53,27 @@ defmodule Cachex.Actions.Invoke do # kept in sync with other actions happening at the same time. The return format # is enforced per the documentation and will crash out if something unexpected # is returned (i.e. 
a non-Tuple, or a Tuple with invalid size). - defp invoke(command(type: :write, execute: exec), cache() = cache, key) do + defp invoke(command(type: :write, execute: exec), cache() = cache, key, default) do Locksmith.transaction(cache, [key], fn -> - {_label, value} = Cachex.get(cache, key, []) - {return, tempv} = exec.(value) + temporary = + cache + |> Cachex.get(key, default) + |> exec.() + |> Actions.format_fetch_value() + |> Actions.normalize_commit() - tempv == value || - apply( - Cachex, - Actions.write_op(value), - [cache, key, tempv, []] - ) + case temporary do + {:commit, {read, write}, options} -> + apply(Cachex, Actions.write_op(write), [cache, key, write, options]) + read - {:ok, return} + {:ignore, read} -> + read + end end) end # Returns an error due to a missing command. - defp invoke(_invalid, _cache, _key), + defp invoke(_invalid, _cache, _key, _default), do: error(:invalid_command) end diff --git a/lib/cachex/actions/keys.ex b/lib/cachex/actions/keys.ex index d56e61bc..2fe1a58b 100644 --- a/lib/cachex/actions/keys.ex +++ b/lib/cachex/actions/keys.ex @@ -27,6 +27,6 @@ defmodule Cachex.Actions.Keys do filter = Query.unexpired() clause = Query.build(where: filter, output: :key) - {:ok, :ets.select(name, clause)} + :ets.select(name, clause) end end diff --git a/lib/cachex/actions/prune.ex b/lib/cachex/actions/prune.ex index a5d3b958..1a32a7ac 100644 --- a/lib/cachex/actions/prune.ex +++ b/lib/cachex/actions/prune.ex @@ -8,7 +8,6 @@ defmodule Cachex.Actions.Prune do # # This command is used by the various limit hooks provided by Cachex. alias Cachex.Query - alias Cachex.Services.Informant # add required imports import Cachex.Spec @@ -37,19 +36,16 @@ defmodule Cachex.Actions.Prune do reclaim = Keyword.get(options, :reclaim, 0.1) reclaim_bound = round(size * reclaim) - case Cachex.size!(cache, const(:local) ++ const(:notify_false)) do + case Cachex.size(cache, const(:local) ++ const(:notify_false)) do cache_size when cache_size <= size -> - notify_worker(0, cache) + 0 cache_size -> cache_size |> calculate_reclaim(size, reclaim_bound) |> calculate_poffset(cache) |> erase_lower_bound(cache, buffer) - |> notify_worker(cache) end - - {:ok, true} end ############### @@ -73,7 +69,7 @@ defmodule Cachex.Actions.Prune do # the reclaim space, meaning that a positive result require us to carry out # further evictions manually down the chain. defp calculate_poffset(reclaim_space, cache) when reclaim_space > 0, - do: reclaim_space - Cachex.purge!(cache, const(:local)) + do: reclaim_space - Cachex.purge(cache, const(:local)) # Erases the least recently written records up to the offset limit. # @@ -86,43 +82,22 @@ defmodule Cachex.Actions.Prune do # which only selects the key and touch time as a minor optimization. The key is # naturally required when it comes to removing the document, and the touch time is # used to determine the sort order required for LRW. 
- defp erase_lower_bound(offset, cache, buffer) when offset > 0 do + defp erase_lower_bound(offset, cache(name: name) = cache, buffer) when offset > 0 do options = :local |> const() |> Enum.concat(const(:notify_false)) |> Enum.concat(buffer: buffer) - with {:ok, stream} <- Cachex.stream(cache, @query, options) do - cache(name: name) = cache - - stream - |> Enum.sort(fn {_k1, t1}, {_k2, t2} -> t1 < t2 end) - |> Enum.take(offset) - |> Enum.each(fn {k, _t} -> :ets.delete(name, k) end) + cache + |> Cachex.stream(@query, options) + |> Enum.sort(fn {_k1, t1}, {_k2, t2} -> t1 < t2 end) + |> Enum.take(offset) + |> Enum.each(fn {k, _t} -> :ets.delete(name, k) end) - offset - end + offset end - defp erase_lower_bound(offset, _state, _buffer), - do: offset - - # Broadcasts the number of removed entries to the cache hooks. - # - # If the offset is not positive we didn't have to remove anything and so we - # don't broadcast any results. An 0 Tuple is returned just to keep compatibility - # with the response type from `Informant.broadcast/3`. - # - # It should be noted that we use a `:clear` action here as these evictions are - # based on size and not on expiration. The evictions done during the purge earlier - # in the pipeline are reported separately and we're only reporting the delta at this - # point in time. Therefore remember that it's important that we're ignoring the - # results of `clear()` and `purge()` calls in this hook, otherwise we would end - # up in a recursive loop due to the hook system. - defp notify_worker(offset, state) when offset > 0, - do: Informant.broadcast(state, {:clear, [[]]}, {:ok, offset}) - - defp notify_worker(_offset, _state), - do: :ok + defp erase_lower_bound(_offset, _state, _buffer), + do: 0 end diff --git a/lib/cachex/actions/purge.ex b/lib/cachex/actions/purge.ex index 6b8b75b6..73f97be6 100644 --- a/lib/cachex/actions/purge.ex +++ b/lib/cachex/actions/purge.ex @@ -32,7 +32,7 @@ defmodule Cachex.Actions.Purge do filter = Query.expired() clause = Query.build(where: filter, output: true) - {:ok, :ets.select_delete(name, clause)} + :ets.select_delete(name, clause) end) end end diff --git a/lib/cachex/actions/put_many.ex b/lib/cachex/actions/put_many.ex index 9d249240..ebeacaf9 100644 --- a/lib/cachex/actions/put_many.ex +++ b/lib/cachex/actions/put_many.ex @@ -56,7 +56,7 @@ defmodule Cachex.Actions.PutMany do end defp map_entries(_exp, [], [], _entries), - do: {:ok, false} + do: :ok defp map_entries(_exp, [], keys, entries), do: {:ok, keys, entries} diff --git a/lib/cachex/actions/reset.ex b/lib/cachex/actions/reset.ex index 8cfe4c5c..93ddb466 100644 --- a/lib/cachex/actions/reset.ex +++ b/lib/cachex/actions/reset.ex @@ -40,7 +40,7 @@ defmodule Cachex.Actions.Reset do reset_cache(cache, only, options) reset_hooks(cache, only, options) - {:ok, true} + :ok end) end diff --git a/lib/cachex/actions/save.ex b/lib/cachex/actions/save.ex index a61578cc..2d88f6c6 100644 --- a/lib/cachex/actions/save.ex +++ b/lib/cachex/actions/save.ex @@ -30,19 +30,14 @@ defmodule Cachex.Actions.Save do file = File.open!(path, [:write, :compressed]) buffer = Options.get(options, :buffer, &is_positive_integer/1, 25) - {:ok, stream} = - options - |> Keyword.get(:local) - |> init_stream(router, cache, buffer) - - stream + options + |> Keyword.get(:local) + |> init_stream(router, cache, buffer) |> Stream.chunk_every(buffer) |> Stream.map(&handle_batch/1) |> Enum.each(&IO.binwrite(file, &1)) - with :ok <- File.close(file) do - {:ok, true} - end + File.close(file) rescue File.Error -> 
error(:unreachable_file)
 end
@@ -52,8 +47,7 @@ defmodule Cachex.Actions.Save do
 ###############
 
 # Use a local stream to lazily walk through records on a local cache.
- defp init_stream(local, router, cache, buffer)
-  when local or router == Local do
+ defp init_stream(local, router, cache, buffer) when local or router == Local do
 options =
 :local
 |> const()
diff --git a/lib/cachex/actions/size.ex b/lib/cachex/actions/size.ex
index 25c3488f..0a07f455 100644
--- a/lib/cachex/actions/size.ex
+++ b/lib/cachex/actions/size.ex
@@ -37,13 +37,13 @@ defmodule Cachex.Actions.Size do
 
 # Retrieve the full table count.
 defp retrieve_count(true, name),
- do: {:ok, :ets.info(name, :size)}
+ do: :ets.info(name, :size)
 
 # Retrieve only the unexpired table count.
 defp retrieve_count(false, name) do
 filter = Query.unexpired()
 clause = Query.build(where: filter, output: true)
 
- {:ok, :ets.select_count(name, clause)}
+ :ets.select_count(name, clause)
 end
 end
diff --git a/lib/cachex/actions/stats.ex b/lib/cachex/actions/stats.ex
index 4ca2df4d..3a4c858d 100644
--- a/lib/cachex/actions/stats.ex
+++ b/lib/cachex/actions/stats.ex
@@ -20,22 +20,19 @@ defmodule Cachex.Actions.Stats do
 If the provided cache does not have statistics enabled, an error
 will be returned.
 """
- @spec execute(Cachex.t(), Keyword.t()) ::
-  {:ok, %{}} | {:error, :stats_disabled}
 def execute(cache() = cache, _options) do
- with {:ok, stats} <- Stats.for_cache(cache) do
+ with %{} = stats <- Stats.for_cache(cache) do
 hits_count = Map.get(stats, :hits, 0)
 miss_count = Map.get(stats, :misses, 0)
 
 case hits_count + miss_count do
 0 ->
- {:ok, stats}
+ stats
 
 v ->
 v
 |> generate_rates(hits_count, miss_count)
 |> Map.merge(stats)
- |> wrap(:ok)
 end
 end
 end
diff --git a/lib/cachex/actions/stream.ex b/lib/cachex/actions/stream.ex
index 72cb1b10..67952136 100644
--- a/lib/cachex/actions/stream.ex
+++ b/lib/cachex/actions/stream.ex
@@ -36,7 +36,6 @@ defmodule Cachex.Actions.Stream do
 options
 |> Options.get(:buffer, &is_positive_integer/1, 25)
 |> init_stream(name, spec)
- |> wrap(:ok)
 
 {:error, _result} ->
 error(:invalid_match)
diff --git a/lib/cachex/actions/take.ex b/lib/cachex/actions/take.ex
index e2540ac2..b5b94a5b 100644
--- a/lib/cachex/actions/take.ex
+++ b/lib/cachex/actions/take.ex
@@ -50,7 +50,7 @@ defmodule Cachex.Actions.Take do
 defp handle_take([entry(value: value) = entry], cache) do
 case Janitor.expired?(cache, entry) do
 false ->
- {:ok, value}
+ value
 
 true ->
 Informant.broadcast(
@@ -59,10 +59,10 @@ defmodule Cachex.Actions.Take do
 const(:purge_override_result)
 )
 
- {:ok, nil}
+ nil
 end
 end
 
 defp handle_take([], _cache),
- do: {:ok, nil}
+ do: nil
 end
diff --git a/lib/cachex/actions/touch.ex b/lib/cachex/actions/touch.ex
index 47fa3609..94e30cfc 100644
--- a/lib/cachex/actions/touch.ex
+++ b/lib/cachex/actions/touch.ex
@@ -42,7 +42,7 @@ defmodule Cachex.Actions.Touch do
 # If the expiration is unset, we update just the touch time inside the entry
 # as we don't have to account for the offset. If an expiration is set, we
 # also update the expiration on the record to be the returned offset.
- defp handle_expiration({:ok, value}, cache, key) do
+ defp handle_expiration(value, cache, key) do
 Actions.update(
 cache,
 key,
diff --git a/lib/cachex/actions/transaction.ex b/lib/cachex/actions/transaction.ex
index ffbb52e6..8c6ba533 100644
--- a/lib/cachex/actions/transaction.ex
+++ b/lib/cachex/actions/transaction.ex
@@ -19,14 +19,13 @@ defmodule Cachex.Actions.Transaction do
 Executes a transaction against the cache.
The Locksmith does most of the work here, we just provide the cache state
-  to the user-defined function. The results are wrapped in an `:ok` tagged
-  Tuple just to protect against internally unwrapped values from bang functions.
+  to the user-defined function, matching on arity for convenience.
   """
   def execute(cache() = cache, keys, operation, _options) do
     Locksmith.transaction(cache, keys, fn ->
       case :erlang.fun_info(operation)[:arity] do
-        0 -> {:ok, operation.()}
-        1 -> {:ok, operation.(cache)}
+        0 -> operation.()
+        1 -> operation.(cache)
       end
     end)
   end
diff --git a/lib/cachex/actions/ttl.ex b/lib/cachex/actions/ttl.ex
index 0d8625ed..08b5f01c 100644
--- a/lib/cachex/actions/ttl.ex
+++ b/lib/cachex/actions/ttl.ex
@@ -25,10 +25,10 @@ defmodule Cachex.Actions.Ttl do
   def execute(cache() = cache, key, _options) do
     case Actions.read(cache, key) do
       entry(modified: modified, expiration: exp) when not is_nil(exp) ->
-        {:ok, modified + exp - now()}
+        modified + exp - now()
 
       _anything_else ->
-        {:ok, nil}
+        nil
     end
   end
 end
diff --git a/lib/cachex/actions/warm.ex b/lib/cachex/actions/warm.ex
index 58331296..365990a8 100644
--- a/lib/cachex/actions/warm.ex
+++ b/lib/cachex/actions/warm.ex
@@ -27,14 +27,11 @@ defmodule Cachex.Actions.Warm do
     only = Keyword.get(options, :only, nil)
     wait = Keyword.get(options, :wait, false)
 
-    warmed =
-      warmers
-      |> Enum.filter(&filter_mod(&1, only))
-      |> Enum.map(&spawn_call(&1, wait))
-      |> Task.yield_many(:infinity)
-      |> Enum.map(&extract_name/1)
-
-    {:ok, warmed}
+    warmers
+    |> Enum.filter(&filter_mod(&1, only))
+    |> Enum.map(&spawn_call(&1, wait))
+    |> Task.yield_many(:infinity)
+    |> Enum.map(&extract_name/1)
   end
 
   ###############
diff --git a/lib/cachex/limit/accessed.ex b/lib/cachex/limit/accessed.ex
index 00a95ec3..ad3a0500 100644
--- a/lib/cachex/limit/accessed.ex
+++ b/lib/cachex/limit/accessed.ex
@@ -53,7 +53,7 @@ defmodule Cachex.Limit.Accessed do
   # This will update the modification time of a key if tracked in a successful cache
   # action. In combination with LRW caching, this provides a simple LRU policy.
   def handle_notify({_action, [key | _]}, _result, cache) do
-    {:ok, _} = Cachex.touch(cache, key)
+    true = Cachex.touch(cache, key)
     {:ok, cache}
   end
diff --git a/lib/cachex/limit/evented.ex b/lib/cachex/limit/evented.ex
index 28f8f54c..d340375e 100644
--- a/lib/cachex/limit/evented.ex
+++ b/lib/cachex/limit/evented.ex
@@ -18,9 +18,6 @@ defmodule Cachex.Limit.Evented do
   """
   use Cachex.Hook
 
-  # actions which didn't trigger
-  @ignored [:error, :ignore]
 
   ######################
   # Hook Configuration #
   ######################
@@ -64,16 +61,16 @@ defmodule Cachex.Limit.Evented do
   #
   # Note that this will ignore error results and only operates on actions which are
   # able to cause a net gain in cache size (so removals are also ignored).
-  def handle_notify(_message, {status, _value}, {cache, {size, options}} = opts)
-      when status not in @ignored do
-    {:ok, true} = Cachex.prune(cache, size, options)
+  def handle_notify(_message, {status, _value}, opts) when status in [:error, :ignore],
+    do: {:ok, opts}
+
+  def handle_notify(_message, _result, {cache, {size, options}} = opts) do
+    Cachex.prune(cache, size, options)
     {:ok, opts}
   end
 
-  def handle_notify(_message, _result, opts),
-    do: {:ok, opts}
-
   @doc false
+  # Receives a provisioned cache instance.
# # The provided cache is then stored in the cache and used for cache calls going
diff --git a/lib/cachex/limit/scheduled.ex b/lib/cachex/limit/scheduled.ex
index 0ac87034..e3e4996c 100644
--- a/lib/cachex/limit/scheduled.ex
+++ b/lib/cachex/limit/scheduled.ex
@@ -52,8 +52,9 @@ defmodule Cachex.Limit.Scheduled do
   #
   # This will execute a bounds check on a cache and schedule a new check.
   def handle_info(:policy_check, {cache, {size, options, scheduling}} = args) do
-    {:ok, true} = Cachex.prune(cache, size, options)
-    schedule(scheduling) && {:noreply, args}
+    Cachex.prune(cache, size, options)
+    schedule(scheduling)
+    {:noreply, args}
   end
 
   @doc false
diff --git a/lib/cachex/router.ex b/lib/cachex/router.ex
index 17536180..3890d460 100644
--- a/lib/cachex/router.ex
+++ b/lib/cachex/router.ex
@@ -93,8 +93,16 @@ defmodule Cachex.Router do
   def route(cache(router: router(module: Router.Local)) = cache, module, call),
     do: route_local(cache, module, call)
 
-  def route(cache() = cache, module, call),
-    do: route_cluster(cache, module, call)
+  def route(cache() = cache, module, {_action, arguments} = call) do
+    # all calls should have options
+    options = List.last(arguments)
+
+    # can force local node with local: true
+    case Keyword.get(options, :local) do
+      true -> route_local(cache, module, call)
+      _any -> route_cluster(cache, module, call)
+    end
+  end
 
   @doc """
   Dispatches a call to an appropriate execution environment.
@@ -105,21 +113,18 @@ defmodule Cachex.Router do
   """
   defmacro route(cache, {action, _arguments} = call) do
    # coveralls-ignore-start
-    act_name =
+    name =
       action
       |> Kernel.to_string()
       |> String.replace_trailing("?", "")
       |> Macro.camelize()
 
-    act_join = :"Elixir.Cachex.Actions.#{act_name}"
+    module = :"Elixir.Cachex.Actions.#{name}"
     # coveralls-ignore-stop
 
     quote do
       Overseer.with(unquote(cache), fn cache ->
-        call = unquote(call)
-        module = unquote(act_join)
-
-        Router.route(cache, module, call)
+        Router.route(cache, unquote(module), unquote(call))
       end)
     end
   end
@@ -137,8 +142,11 @@ defmodule Cachex.Router do
   #   - Booleans are always AND-ed.
   #   - Maps are always merged (recursively).
   #
-  # This has to be public due to scopes, but we hide the docs
-  # because we don't really care for anybody else calling it.
+  # Any :ok results are matched explicitly so that anything else crashes,
+  # until we figure out a better way to handle them...
+  defp result_merge(:ok, :ok),
+    do: :ok
+
   defp result_merge(left, right) when is_list(left),
     do: left ++ right
 
@@ -216,8 +224,7 @@ defmodule Cachex.Router do
  # the total number of slots available (i.e. the count of the nodes). If it comes
  # out to the local node, just execute the local code, otherwise RPC the base call
  # to the remote node, and just assume that it'll correctly handle it.
-  defp route_cluster(cache, module, {action, [key | _]} = call)
-       when action in @keyed_actions do
+  defp route_cluster(cache, module, {action, [key | _]} = call) when action in @keyed_actions do
     cache(router: router(module: router, state: nodes)) = cache
     route_node(cache, module, call, router.route(nodes, key))
   end
@@ -225,7 +232,6 @@ defmodule Cachex.Router do
   # actions which merge outputs
   @merge_actions [
     :clear,
-    :count,
     :empty?,
     :export,
     :import,
@@ -241,50 +247,22 @@ defmodule Cachex.Router do
  # them with the results on the local node. The hooks will only be notified
  # on the local node, due to an annoying recursion issue when handling the
  # same across all nodes - seems to provide better logic though.
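For illustration, here is roughly what these merged actions look like from the caller's side after this refactor, assuming a two-node cluster where keys 1 and 2 route to different nodes (as in the test suite further down):

```elixir
# one write lands on each node of the cluster
:ok = Cachex.put(cache, 1, 1)
:ok = Cachex.put(cache, 2, 2)

# local: true skips the cluster merge entirely, while local: false
# blends the per-node counts into one integer via result_merge/2
1 = Cachex.size(cache, local: true)
2 = Cachex.size(cache, local: false)
```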
- defp route_cluster(cache, module, {action, arguments} = call) - when action in @merge_actions do + defp route_cluster(cache, module, {action, arguments} = call) when action in @merge_actions do # fetch the nodes from the cluster state cache(router: router(module: router, state: state)) = cache - # all calls have options we can use - options = List.last(arguments) + # execution on the local node to combine + result = route_local(cache, module, call) - # can force local node setting local: true - results = - case Keyword.get(options, :local) do - true -> - [] - - _any -> - # don't want to execute on the local node - other_nodes = - state - |> router.nodes() - |> List.delete(node()) - - # execute the call on all other nodes - {results, _} = - :rpc.multicall( - other_nodes, - module, - :execute, - [cache | arguments] - ) - - results - end - - # execution on the local node, using the local macros and then unpack - {:ok, result} = route_local(cache, module, call) - - # results merge - merge_result = - results - |> Enum.map(&elem(&1, 1)) - |> Enum.reduce(result, &result_merge/2) - - # return after merge - {:ok, merge_result} + # execute the call on all other nodes + {results, _} = + state + |> router.nodes() + |> List.delete(node()) + |> :rpc.multicall(module, :execute, [cache | arguments]) + + # run result merging to blend both sets + Enum.reduce(results, result, &result_merge/2) end # actions which always run locally @@ -300,9 +278,8 @@ defmodule Cachex.Router do # Provides handling of `:inspect` operations. # # These operations are guaranteed to run on the local nodes. - defp route_cluster(cache, module, {action, _arguments} = call) - when action in @local_actions, - do: route_local(cache, module, call) + defp route_cluster(cache, module, {action, _arguments} = call) when action in @local_actions, + do: route_local(cache, module, call) # Provides handling of `:put_many` operations. # @@ -319,25 +296,10 @@ defmodule Cachex.Router do defp route_cluster(cache, module, {:transaction, [_keys | _]} = call), do: route_batch(cache, module, call, & &1) - # Any other actions are only available with local: true in the call - defp route_cluster(cache, module, {_action, arguments} = call) do - # all calls have options we can use - options = List.last(arguments) - - # can force local node setting local: true - case Keyword.get(options, :local) do - true -> route_local(cache, module, call) - _any -> error(:non_distributed) - end - end - - # coveralls-ignore-start # Catch-all just in case we missed something... defp route_cluster(_cache, _module, _call), do: error(:non_distributed) - # coveralls-ignore-stop - # Calls a slot for the provided cache action if all keys slot to the same node. # # This is a delegate handler for `route_node/4`, but ensures that all keys slot to the @@ -363,12 +325,11 @@ defmodule Cachex.Router do # # This will determine a local slot and delegate locally if so, bypassing # any RPC calls in order to gain a slight bit of performance. 
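To make the batching constraint above concrete, a sketch of what callers of `put_many/3` see on a cluster; the `:cross_slot` error name comes from Cachex's distributed routing, though treat the exact shape here as an assumption:

```elixir
# keys 1 and 2 are known to hash to different nodes in the test suite,
# so a multi-key write spanning slots is rejected rather than split up
{:error, :cross_slot} = Cachex.put_many(cache, [{1, 1}, {2, 2}])
```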
- defp route_node(cache, module, {action, arguments} = call, node) do - current = node() - cache(name: name) = cache + defp route_node(cache(name: name) = cache, module, {action, arguments} = call, node) do + here = node() case node do - ^current -> + ^here -> route_local(cache, module, call) targeted -> diff --git a/lib/cachex/services/courier.ex b/lib/cachex/services/courier.ex index 55be0533..104d9c48 100644 --- a/lib/cachex/services/courier.ex +++ b/lib/cachex/services/courier.ex @@ -70,8 +70,8 @@ defmodule Cachex.Services.Courier do {:noreply, {cache, Map.put(tasks, key, {pid, [caller | listeners]})}} nil -> - case Get.execute(cache, key, []) do - {:ok, nil} -> + case Get.execute(cache, key, :"$fetch", []) do + :"$fetch" -> parent = self() worker = @@ -106,8 +106,8 @@ defmodule Cachex.Services.Courier do {:noreply, {cache, Map.put(tasks, key, {worker, [caller]})}} - {:ok, _value} = res -> - {:reply, res, state} + result -> + {:reply, result, state} end end end @@ -138,7 +138,7 @@ defmodule Cachex.Services.Courier do result = with {:commit, value} <- result do - {:ok, value} + value end for caller <- children do diff --git a/lib/cachex/services/janitor.ex b/lib/cachex/services/janitor.ex index 7449e533..382d4b3b 100644 --- a/lib/cachex/services/janitor.ex +++ b/lib/cachex/services/janitor.ex @@ -70,7 +70,7 @@ defmodule Cachex.Services.Janitor do If the service is disabled on the cache, an error is returned. """ - @spec last_run(Cachex.t()) :: %{} + @spec last_run(Cachex.t()) :: %{} | Cachex.error() def last_run(cache(expiration: expiration(interval: nil))), do: error(:janitor_disabled) @@ -99,7 +99,7 @@ defmodule Cachex.Services.Janitor do # # The returned information should be treated as non-guaranteed. def handle_call(:last, _ctx, {_cache, last} = state), - do: {:reply, {:ok, last}, state} + do: {:reply, last, state} @doc false # Executes an expiration cleanup against a cache table. @@ -110,7 +110,7 @@ defmodule Cachex.Services.Janitor do started = now() options = const(:local) ++ const(:notify_false) - {duration, {:ok, count}} = + {duration, count} = :timer.tc(fn -> query = Query.build( @@ -154,7 +154,7 @@ defmodule Cachex.Services.Janitor do do: Cachex.purge(cache, const(:local)) defp handle_skip_check(true, _cache), - do: {:ok, 0} + do: 0 # Schedules a check to occur after the designated interval. Once scheduled, # returns the state - this is just sugar for pipelining with a state. diff --git a/lib/cachex/services/overseer.ex b/lib/cachex/services/overseer.ex index 68d54df9..3cea8de6 100644 --- a/lib/cachex/services/overseer.ex +++ b/lib/cachex/services/overseer.ex @@ -11,14 +11,12 @@ defmodule Cachex.Services.Overseer do # this new design. Cache states are stored in a single ETS table backing this # module and all cache calls will be routed through here first to ensure their # state is up to date. - import Cachex.Error import Cachex.Spec # add any aliases alias Cachex.Services # add service aliases - alias Services.Overseer alias Services.Steward # constants for manager/table names @@ -52,22 +50,6 @@ defmodule Cachex.Services.Overseer do ) end - @doc """ - Retrieves a cache from a name or record. - - Retrieving a cache will map the provided argument to a - cache record if available, otherwise a nil value. - """ - @spec get(Cachex.t()) :: Cachex.t() | nil - def get(cache() = cache), - do: cache - - def get(name) when is_atom(name), - do: retrieve(name) - - def get(_miss), - do: nil - @doc """ Determines whether a cache is known by the Overseer. 
""" @@ -76,17 +58,16 @@ defmodule Cachex.Services.Overseer do do: :ets.member(@table_name, name) @doc """ - Registers a cache record against a name. - """ - @spec register(atom, Cachex.t()) :: true - def register(name, cache() = cache) when is_atom(name), - do: :ets.insert(@table_name, {name, cache}) + Retrieves a cache from a name or record. - @doc """ - Retrieves a cache record, or `nil` if none exists. + Retrieving a cache will map the provided argument to a + cache record if available, otherwise a nil value. """ - @spec retrieve(atom) :: Cachex.t() | nil - def retrieve(name) do + @spec lookup(Cachex.t()) :: Cachex.t() | nil + def lookup(cache() = cache), + do: cache + + def lookup(name) when is_atom(name) do case :ets.lookup(@table_name, name) do [{^name, state}] -> state @@ -96,6 +77,16 @@ defmodule Cachex.Services.Overseer do end end + def lookup(_any), + do: nil + + @doc """ + Registers a cache record against a name. + """ + @spec register(atom, Cachex.t()) :: true + def register(name, cache() = cache) when is_atom(name), + do: :ets.insert(@table_name, {name, cache}) + @doc """ Determines whether the Overseer has been started. """ @@ -126,7 +117,7 @@ defmodule Cachex.Services.Overseer do @spec update(atom, Cachex.t() | (Cachex.t() -> Cachex.t())) :: Cachex.t() def update(name, fun) when is_atom(name) and is_function(fun, 1) do transaction(name, fn -> - cstate = retrieve(name) + cstate = lookup(name) nstate = fun.(cstate) register(name, nstate) @@ -145,16 +136,12 @@ defmodule Cachex.Services.Overseer do """ @spec with(cache :: Cachex.t(), (cache :: Cachex.t() -> any)) :: any def with(cache, handler) do - case Overseer.get(cache) do - nil -> - error(:no_cache) - - cache(name: name) = cache -> - if :erlang.whereis(name) != :undefined do - handler.(cache) - else - error(:no_cache) - end + state = lookup(cache) + + if state == nil do + raise ArgumentError, "no cache available: #{inspect(cache)}" end + + handler.(state) end end diff --git a/lib/cachex/spec.ex b/lib/cachex/spec.ex index 45c608f3..340dc7f7 100644 --- a/lib/cachex/spec.ex +++ b/lib/cachex/spec.ex @@ -328,7 +328,7 @@ defmodule Cachex.Spec do # Constant to override purge results defmacro const(:purge_override_result), - do: quote(do: {:ok, 1}) + do: quote(do: 1) # Constant to override purge calls defmacro const(:purge_override), diff --git a/lib/cachex/stats.ex b/lib/cachex/stats.ex index e87d3901..c4dd3962 100644 --- a/lib/cachex/stats.ex +++ b/lib/cachex/stats.ex @@ -33,7 +33,7 @@ defmodule Cachex.Stats do @doc """ Retrieves the latest statistics for a cache. """ - @spec for_cache(cache :: Cachex.t()) :: {:ok, map()} | {:error, atom()} + @spec for_cache(cache :: Cachex.t()) :: map() | {:error, atom()} def for_cache(cache() = cache) do case Hook.locate(cache, __MODULE__) do nil -> @@ -66,7 +66,7 @@ defmodule Cachex.Stats do # # This will just return the internal state to the calling process. def handle_call(:retrieve, _ctx, stats), - do: {:reply, {:ok, stats}, stats} + do: {:reply, stats, stats} @doc false # Registers an action against the stats container. @@ -108,31 +108,31 @@ defmodule Cachex.Stats do # This will increment the hits/misses of the stats container, based on # whether the value pulled back is `nil` or not (as `nil` is treated as # a missing value through Cachex as of v3). 
- defp register_action(stats, {:get, _args}, {_tag, nil}), + defp register_action(stats, {:get, _args}, nil), do: increment(stats, [:misses], 1) - defp register_action(stats, {:get, _args}, {_tag, _value}), + defp register_action(stats, {:get, _args}, _value), do: increment(stats, [:hits], 1) # Handles registration of `put()` command calls. # # These calls will just increment the `:writes` count of the statistics # container, but only if the write succeeded (as determined by the value). - defp register_action(stats, {:put, _args}, {_tag, true}), + defp register_action(stats, {:put, _args}, :ok), do: increment(stats, [:writes], 1) # Handles registration of `put_many()` command calls. # # This is the same as the `put()` handler except that it will count the # number of pairs being processed when incrementing the `:writes` key. - defp register_action(stats, {:put_many, [pairs | _]}, {_tag, true}), + defp register_action(stats, {:put_many, [pairs | _]}, :ok), do: increment(stats, [:writes], length(pairs)) # Handles registration of `del()` command calls. # # Cache deletions will increment the `:evictions` key count, based on # whether the call succeeded (i.e. the result value is truthy). - defp register_action(stats, {:del, _args}, {_tag, true}), + defp register_action(stats, {:del, _args}, :ok), do: increment(stats, [:evictions], 1) # Handles registration of `purge()` command calls. @@ -140,7 +140,7 @@ defmodule Cachex.Stats do # A purge call will increment the `:evictions` key using the count of # purged keys as the number to increment by. The `:expirations` key # will also be incremented in the same way, to surface TTL deletions. - defp register_action(stats, {:purge, _args}, {_status, count}) do + defp register_action(stats, {:purge, _args}, count) do stats |> increment([:expirations], count) |> increment([:evictions], count) @@ -153,6 +153,9 @@ defmodule Cachex.Stats do defp register_action(stats, {:fetch, _args}, {label, _value}), do: register_fetch(stats, label) + defp register_action(stats, {:fetch, _args}, _value), + do: register_fetch(stats, :ok) + # Handles registration of `incr()` command calls. # # This delegates through to `register_increment/4` as the logic is a @@ -171,24 +174,24 @@ defmodule Cachex.Stats do # # This will increment the `:updates` key if the value signals that the # update was successful, otherwise nothing will be modified. - defp register_action(stats, {:update, _args}, {_tag, true}), + defp register_action(stats, {:update, _args}, true), do: increment(stats, [:updates], 1) - # Handles registration of `clear()` command calls. + # Handles registration of `clear()` and `prune()` command calls. # # This operates in the same way as the `del()` call statistics, except that # a count is received in the result, and is used to increment by instead. - defp register_action(stats, {:clear, _args}, {_tag, count}), + defp register_action(stats, {action, _args}, count) when action in [:clear, :prune], do: increment(stats, [:evictions], count) # Handles registration of `exists?()` command calls. # # The result boolean will determine whether this increments the `:hits` or # `:misses` key of the main statistics container (true/false respectively). 
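The boolean results here feed the same hit/miss counters that reads do; for illustration, mirroring the `exists?` assertions in the test changes below:

```elixir
:ok = Cachex.put(cache, "known", 1)

true = Cachex.exists?(cache, "known")     # increments :hits
false = Cachex.exists?(cache, "unknown")  # increments :misses
```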
-  defp register_action(stats, {:exists?, _args}, {_tag, true}),
+  defp register_action(stats, {:exists?, _args}, true),
     do: increment(stats, [:hits], 1)
 
-  defp register_action(stats, {:exists?, _args}, {_tag, false}),
+  defp register_action(stats, {:exists?, _args}, false),
     do: increment(stats, [:misses], 1)
 
   # Handles registration of `take()` command calls.
@@ -196,7 +199,7 @@
   # Take calls are a little complicated because they need to increment the
   # global eviction count (due to removal) but also increment the global
   # hit/miss count, in addition to the status in the `:take` namespace.
-  defp register_action(stats, {:take, _args}, {_tag, nil}),
+  defp register_action(stats, {:take, _args}, nil),
     do: increment(stats, [:misses], 1)
 
   defp register_action(stats, {:take, _args}, _result) do
@@ -208,16 +211,18 @@
   # Handles registration of `invoke()` command calls.
   #
   # This will increment a custom invocations map to track custom command calls.
-  defp register_action(stats, {:invoke, [cmd | _args]}, {:ok, _value}),
+  defp register_action(stats, {:invoke, _args}, {:error, :invalid_command}),
+    do: stats
+
+  defp register_action(stats, {:invoke, [cmd | _args]}, _any),
     do: increment(stats, [:invocations, cmd], 1)
 
   # Handles registration of updating command calls.
   #
   # All of the matched calls (dictated by @update_calls) will increment the main
   # `:updates` key in the statistics map only if the value is received as `true`.
-  defp register_action(stats, {action, _args}, {_tag, true})
-       when action in @update_calls,
-       do: increment(stats, [:updates], 1)
+  defp register_action(stats, {action, _args}, true) when action in @update_calls,
+    do: increment(stats, [:updates], 1)
 
   # No-op to avoid crashing on other statistics.
   defp register_action(stats, _action, _result),
@@ -254,7 +259,10 @@
  # basically just a sign flip). It's split out as it's a little more involved
  # than a basic stat count as we need to reverse the arguments to determine if
  # there was a new write or an update (based on the initial/amount arguments).
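A sketch of the intended accounting for increments; classifying the first call as a write is an assumption drawn from the initial/amount reversal described above:

```elixir
# the first incr creates the entry, so it should register as a write,
# while the follow-up modifies an existing value and counts as an update
1 = Cachex.incr(cache, "counter")
2 = Cachex.incr(cache, "counter")
```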
- defp register_increment(stats, {_type, args}, {_tag, value}, offset) do + defp register_increment(stats, _call, {:error, _reason}, _offset), + do: stats + + defp register_increment(stats, {_type, args}, value, offset) do amount = Enum.at(args, 1, 1) options = Enum.at(args, 2, []) diff --git a/test/cachex/actions/clear_test.exs b/test/cachex/actions/clear_test.exs index 78fdc656..84f0113d 100644 --- a/test/cachex/actions/clear_test.exs +++ b/test/cachex/actions/clear_test.exs @@ -12,34 +12,26 @@ defmodule Cachex.Actions.ClearTest do cache = TestUtils.create_cache(hooks: [hook]) # fill with some items - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) - {:ok, true} = Cachex.put(cache, 3, 3) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok + assert Cachex.put(cache, 3, 3) == :ok # clear all hook TestUtils.flush() - # clear the cache - result = Cachex.clear(cache) - # 3 items should have been removed - assert(result == {:ok, 3}) + assert Cachex.clear(cache) == 3 # verify the hooks were updated with the clear - assert_receive({{:clear, [[]]}, ^result}) + assert_receive {{:clear, [[]]}, 3} # verify the size call never notified - refute_receive({{:size, [[]]}, ^result}) - - # retrieve all items - value1 = Cachex.get(cache, 1) - value2 = Cachex.get(cache, 2) - value3 = Cachex.get(cache, 3) + refute_receive {{:size, [[]]}, 3} - # verify the items are gone - assert(value1 == {:ok, nil}) - assert(value2 == {:ok, nil}) - assert(value3 == {:ok, nil}) + # retrieve all items, verify the items are gone + assert Cachex.get(cache, 1) == nil + assert Cachex.get(cache, 2) == nil + assert Cachex.get(cache, 3) == nil end # This test verifies that the distributed router correctly controls @@ -53,18 +45,14 @@ defmodule Cachex.Actions.ClearTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # retrieve the cache size, should be 2 - {:ok, 2} = Cachex.size(cache) + assert Cachex.size(cache) == 2 # clear just the local cache to start with - clear1 = Cachex.clear(cache, local: true) - clear2 = Cachex.clear(cache, local: false) - - # check the local removed 1 - assert(clear1 == {:ok, 1}) - assert(clear2 == {:ok, 1}) + assert Cachex.clear(cache, local: true) == 1 + assert Cachex.clear(cache, local: false) == 1 end end diff --git a/test/cachex/actions/decr_test.exs b/test/cachex/actions/decr_test.exs index ac3f9901..c6b8e969 100644 --- a/test/cachex/actions/decr_test.exs +++ b/test/cachex/actions/decr_test.exs @@ -14,32 +14,19 @@ defmodule Cachex.Actions.DecrTest do # define write options opts1 = [default: 10] - # decrement some items - decr1 = Cachex.decr(cache, "key1") - decr2 = Cachex.decr(cache, "key1", 2) - decr3 = Cachex.decr(cache, "key2", 1, opts1) - - # the first result should be -1 - assert(decr1 == {:ok, -1}) - - # the second result should be -3 - assert(decr2 == {:ok, -3}) - - # the third result should be 9 - assert(decr3 == {:ok, 9}) + # decrement some items, verify the values + assert Cachex.decr(cache, "key1") == -1 + assert Cachex.decr(cache, "key1", 2) == -3 + assert Cachex.decr(cache, "key2", 1, opts1) == 9 # verify the hooks were updated with the decrement - assert_receive({{:decr, ["key1", 1, []]}, ^decr1}) - assert_receive({{:decr, ["key1", 2, []]}, ^decr2}) - assert_receive({{:decr, ["key2", 1, ^opts1]}, ^decr3}) + 
assert_receive {{:decr, ["key1", 1, []]}, -1}
+    assert_receive {{:decr, ["key1", 2, []]}, -3}
+    assert_receive {{:decr, ["key2", 1, ^opts1]}, 9}
 
-    # retrieve all items
-    value1 = Cachex.get(cache, "key1")
-    value2 = Cachex.get(cache, "key2")
-
-    # verify the items match
-    assert(value1 == {:ok, -3})
-    assert(value2 == {:ok, 9})
+    # retrieve all items, verify the items match
+    assert Cachex.get(cache, "key1") == -3
+    assert Cachex.get(cache, "key2") == 9
   end
 
   # This test covers the negative case where a value exists but is not an integer,
@@ -50,13 +37,10 @@ defmodule Cachex.Actions.DecrTest do
     cache = TestUtils.create_cache()
 
     # set a non-numeric value
-    {:ok, true} = Cachex.put(cache, "key", "value")
-
-    # try to increment the value
-    result = Cachex.decr(cache, "key", 1)
+    assert Cachex.put(cache, "key", "value") == :ok
 
-    # we should receive an error
-    assert(result == {:error, :non_numeric_value})
+    # try to decrement the value, we should receive an error
+    assert Cachex.decr(cache, "key", 1) == {:error, :non_numeric_value}
   end
 
   # This test verifies that this action is correctly distributed across
@@ -68,15 +52,11 @@ defmodule Cachex.Actions.DecrTest do
     {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2)
 
     # we know that 1 & 2 hash to different nodes
-    {:ok, -1} = Cachex.decr(cache, 1, 1)
-    {:ok, -2} = Cachex.decr(cache, 2, 2)
+    assert Cachex.decr(cache, 1, 1) == -1
+    assert Cachex.decr(cache, 2, 2) == -2
 
     # check the results of the calls across nodes
-    size1 = Cachex.size(cache, local: true)
-    size2 = Cachex.size(cache, local: false)
-
-    # one local, two total
-    assert(size1 == {:ok, 1})
-    assert(size2 == {:ok, 2})
+    assert Cachex.size(cache, local: true) == 1
+    assert Cachex.size(cache, local: false) == 2
   end
 end
diff --git a/test/cachex/actions/del_test.exs b/test/cachex/actions/del_test.exs
index ded05532..4c7c52db 100644
--- a/test/cachex/actions/del_test.exs
+++ b/test/cachex/actions/del_test.exs
@@ -12,27 +12,19 @@ defmodule Cachex.Actions.DelTest do
     cache = TestUtils.create_cache(hooks: [hook])
 
     # add some cache entries
-    {:ok, true} = Cachex.put(cache, 1, 1)
+    assert Cachex.put(cache, 1, 1) == :ok
 
     # delete some entries
-    result1 = Cachex.del(cache, 1)
-    result2 = Cachex.del(cache, 2)
-
-    # verify both are true
-    assert(result1 == {:ok, true})
-    assert(result2 == {:ok, true})
+    assert Cachex.del(cache, 1) == :ok
+    assert Cachex.del(cache, 2) == :ok
 
     # verify the hooks were updated with the delete
-    assert_receive({{:del, [1, []]}, ^result1})
-    assert_receive({{:del, [2, []]}, ^result2})
-
-    # retrieve all items
-    value1 = Cachex.get(cache, 1)
-    value2 = Cachex.get(cache, 2)
+    assert_receive {{:del, [1, []]}, :ok}
+    assert_receive {{:del, [2, []]}, :ok}
 
-    # verify the items are gone
-    assert(value1 == {:ok, nil})
-    assert(value2 == {:ok, nil})
+    # retrieve all items, verify the items are gone
+    assert Cachex.get(cache, 1) == nil
+    assert Cachex.get(cache, 2) == nil
   end
 
   # This test verifies that this action is correctly distributed across
@@ -44,27 +36,19 @@ defmodule Cachex.Actions.DelTest do
     {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2)
 
     # we know that 1 & 2 hash to different nodes
-    {:ok, true} = Cachex.put(cache, 1, 1)
-    {:ok, true} = Cachex.put(cache, 2, 2)
+    assert Cachex.put(cache, 1, 1) == :ok
+    assert Cachex.put(cache, 2, 2) == :ok
 
     # check the results of the calls across nodes
-    size1 = Cachex.size(cache, local: true)
-    size2 = Cachex.size(cache, local: false)
-
-    # one local, two total
-    assert(size1 == {:ok, 1})
-    assert(size2 == {:ok, 2})
+    assert
Cachex.size(cache, local: true) == 1 + assert Cachex.size(cache, local: false) == 2 # delete each item from the cache cluster - {:ok, true} = Cachex.del(cache, 1) - {:ok, true} = Cachex.del(cache, 2) + assert Cachex.del(cache, 1) == :ok + assert Cachex.del(cache, 2) == :ok # check the results of the calls across nodes - size3 = Cachex.size(cache, local: true) - size4 = Cachex.size(cache, local: false) - - # no records are left - assert(size3 == {:ok, 0}) - assert(size4 == {:ok, 0}) + assert Cachex.size(cache, local: true) == 0 + assert Cachex.size(cache, local: false) == 0 end end diff --git a/test/cachex/actions/empty_test.exs b/test/cachex/actions/empty_test.exs index 2016f156..b5911176 100644 --- a/test/cachex/actions/empty_test.exs +++ b/test/cachex/actions/empty_test.exs @@ -13,25 +13,19 @@ defmodule Cachex.Actions.EmptyTest do cache = TestUtils.create_cache(hooks: [hook]) # check if the cache is empty - result1 = Cachex.empty?(cache) - - # it should be - assert(result1 == {:ok, true}) + assert Cachex.empty?(cache) # verify the hooks were updated with the message - assert_receive({{:empty?, [[]]}, ^result1}) + assert_receive {{:empty?, [[]]}, true} # add some cache entries - {:ok, true} = Cachex.put(cache, 1, 1) + assert Cachex.put(cache, 1, 1) == :ok # check if the cache is empty - result2 = Cachex.empty?(cache) - - # it shouldn't be - assert(result2 == {:ok, false}) + refute Cachex.empty?(cache) # verify the hooks were updated with the message - assert_receive({{:empty?, [[]]}, ^result2}) + assert_receive {{:empty?, [[]]}, false} end # This test verifies that the distributed router correctly controls @@ -45,37 +39,25 @@ defmodule Cachex.Actions.EmptyTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # check if the cache is empty, locally and remote - empty1 = Cachex.empty?(cache, local: true) - empty2 = Cachex.empty?(cache, local: false) - - # both should be non-empty - assert(empty1 == {:ok, false}) - assert(empty2 == {:ok, false}) + refute Cachex.empty?(cache, local: true) + refute Cachex.empty?(cache, local: false) # delete the key on the local node - {:ok, 1} = Cachex.clear(cache, local: true) + assert Cachex.clear(cache, local: true) == 1 # check again as to whether the cache is empty - empty3 = Cachex.empty?(cache, local: true) - empty4 = Cachex.empty?(cache, local: false) - - # only the local node is now empty - assert(empty3 == {:ok, true}) - assert(empty4 == {:ok, false}) + assert Cachex.empty?(cache, local: true) + refute Cachex.empty?(cache, local: false) # finally delete all keys in the cluster - {:ok, 1} = Cachex.clear(cache, local: false) + assert Cachex.clear(cache, local: false) == 1 # check again as to whether the cache is empty - empty5 = Cachex.empty?(cache, local: true) - empty6 = Cachex.empty?(cache, local: false) - - # both should now show empty - assert(empty5 == {:ok, true}) - assert(empty6 == {:ok, true}) + assert Cachex.empty?(cache, local: true) + assert Cachex.empty?(cache, local: false) end end diff --git a/test/cachex/actions/execute_test.exs b/test/cachex/actions/execute_test.exs index 7e929260..df720e03 100644 --- a/test/cachex/actions/execute_test.exs +++ b/test/cachex/actions/execute_test.exs @@ -12,13 +12,13 @@ defmodule Cachex.Actions.ExecuteTest do result = Cachex.execute(cache, fn cache -> [ - Cachex.put!(cache, 1, 1), - 
Cachex.put!(cache, 2, 2), - Cachex.put!(cache, 3, 3) + Cachex.put(cache, 1, 1), + Cachex.put(cache, 2, 2), + Cachex.put(cache, 3, 3) ] end) # verify the block returns correct values - assert(result == {:ok, [true, true, true]}) + assert result == [:ok, :ok, :ok] end end diff --git a/test/cachex/actions/exists_test.exs b/test/cachex/actions/exists_test.exs index 614dcae6..52f6244b 100644 --- a/test/cachex/actions/exists_test.exs +++ b/test/cachex/actions/exists_test.exs @@ -12,8 +12,8 @@ defmodule Cachex.Actions.ExistsTest do cache = TestUtils.create_cache(hooks: [hook]) # add some keys to the cache - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2, expire: 1) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2, expire: 1) == :ok # let TTLs clear :timer.sleep(2) @@ -22,34 +22,22 @@ defmodule Cachex.Actions.ExistsTest do TestUtils.flush() # check if several keys exist - exists1 = Cachex.exists?(cache, 1) - exists2 = Cachex.exists?(cache, 2) - exists3 = Cachex.exists?(cache, 3) - - # the first result should exist - assert(exists1 == {:ok, true}) - - # the next two should be missing - assert(exists2 == {:ok, false}) - assert(exists3 == {:ok, false}) + assert Cachex.exists?(cache, 1) + refute Cachex.exists?(cache, 2) + refute Cachex.exists?(cache, 3) # verify the hooks were updated with the message - assert_receive({{:exists?, [1, []]}, ^exists1}) - assert_receive({{:exists?, [2, []]}, ^exists2}) - assert_receive({{:exists?, [3, []]}, ^exists3}) + assert_receive {{:exists?, [1, []]}, true} + assert_receive {{:exists?, [2, []]}, false} + assert_receive {{:exists?, [3, []]}, false} # check we received valid purge actions for the TTL - assert_receive({{:purge, [[]]}, {:ok, 1}}) + assert_receive {{:purge, [[]]}, 1} # retrieve all values from the cache - value1 = Cachex.get(cache, 1) - value2 = Cachex.get(cache, 2) - value3 = Cachex.get(cache, 3) - - # verify the second was removed - assert(value1 == {:ok, 1}) - assert(value2 == {:ok, nil}) - assert(value3 == {:ok, nil}) + assert Cachex.get(cache, 1) == 1 + assert Cachex.get(cache, 2) == nil + assert Cachex.get(cache, 3) == nil end # This test verifies that this action is correctly distributed across @@ -61,15 +49,11 @@ defmodule Cachex.Actions.ExistsTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # check the results of the calls across nodes - exists1 = Cachex.exists?(cache, 1) - exists2 = Cachex.exists?(cache, 2) - - # both exist in the cluster - assert(exists1 == {:ok, true}) - assert(exists2 == {:ok, true}) + assert Cachex.exists?(cache, 1) + assert Cachex.exists?(cache, 2) end end diff --git a/test/cachex/actions/expire_at_test.exs b/test/cachex/actions/expire_at_test.exs index f7ee76d7..385f9f17 100644 --- a/test/cachex/actions/expire_at_test.exs +++ b/test/cachex/actions/expire_at_test.exs @@ -13,9 +13,9 @@ defmodule Cachex.Actions.ExpireAtTest do cache = TestUtils.create_cache(hooks: [hook]) # add some keys to the cache - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2, expire: 10) - {:ok, true} = Cachex.put(cache, 3, 3, expire: 10) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2, expire: 10) == :ok + assert Cachex.put(cache, 3, 3, expire: 10) == :ok # clear messages TestUtils.flush() @@ -28,43 +28,30 @@ defmodule 
Cachex.Actions.ExpireAtTest do p_expire_time = ctime - 10000 # expire several keys - result1 = Cachex.expire_at(cache, 1, f_expire_time) - result2 = Cachex.expire_at(cache, 2, f_expire_time) - result3 = Cachex.expire_at(cache, 3, p_expire_time) - result4 = Cachex.expire_at(cache, 4, f_expire_time) - - # the first two should succeed - assert(result1 == {:ok, true}) - assert(result2 == {:ok, true}) - - # the third should succeed and remove the key - assert(result3 == {:ok, true}) - - # the last one is missing and should fail - assert(result4 == {:ok, false}) + assert Cachex.expire_at(cache, 1, f_expire_time) + assert Cachex.expire_at(cache, 2, f_expire_time) + assert Cachex.expire_at(cache, 3, p_expire_time) + refute Cachex.expire_at(cache, 4, f_expire_time) # verify the hooks were updated with the message - assert_receive({{:expire_at, [1, ^f_expire_time, []]}, ^result1}) - assert_receive({{:expire_at, [2, ^f_expire_time, []]}, ^result2}) - assert_receive({{:expire_at, [3, ^p_expire_time, []]}, ^result3}) - assert_receive({{:expire_at, [4, ^f_expire_time, []]}, ^result4}) + assert_receive {{:expire_at, [1, ^f_expire_time, []]}, true} + assert_receive {{:expire_at, [2, ^f_expire_time, []]}, true} + assert_receive {{:expire_at, [3, ^p_expire_time, []]}, true} + assert_receive {{:expire_at, [4, ^f_expire_time, []]}, false} - # check we received valid purge actions for the removed key - assert_receive({{:purge, [[]]}, {:ok, 1}}) + # purge expired records + assert Cachex.purge(cache) - # retrieve all TTLs from the cache - ttl1 = Cachex.ttl!(cache, 1) - ttl2 = Cachex.ttl!(cache, 2) - ttl3 = Cachex.ttl(cache, 3) - ttl4 = Cachex.ttl(cache, 4) + # check we received valid purge actions for the removed key + assert_receive {{:purge, [[]]}, 1} # verify the new TTL has taken effect - assert_in_delta(ttl1, 10000, 25) - assert_in_delta(ttl2, 10000, 25) + assert_in_delta Cachex.ttl(cache, 1), 10000, 25 + assert_in_delta Cachex.ttl(cache, 2), 10000, 25 # assert the last two keys don't exist - assert(ttl3 == {:ok, nil}) - assert(ttl4 == {:ok, nil}) + assert Cachex.ttl(cache, 3) == nil + assert Cachex.ttl(cache, 4) == nil end # This test verifies that this action is correctly distributed across @@ -76,19 +63,15 @@ defmodule Cachex.Actions.ExpireAtTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # set expirations on both keys - {:ok, true} = Cachex.expire_at(cache, 1, now() + 5000) - {:ok, true} = Cachex.expire_at(cache, 2, now() + 5000) + assert Cachex.expire_at(cache, 1, now() + 5000) + assert Cachex.expire_at(cache, 2, now() + 5000) # check the expiration of each key in the cluster - {:ok, expiration1} = Cachex.ttl(cache, 1) - {:ok, expiration2} = Cachex.ttl(cache, 2) - - # both have an expiration - assert(expiration1 != nil) - assert(expiration2 != nil) + assert Cachex.ttl(cache, 1) != nil + assert Cachex.ttl(cache, 2) != nil end end diff --git a/test/cachex/actions/expire_test.exs b/test/cachex/actions/expire_test.exs index 8d91bedf..221e43a1 100644 --- a/test/cachex/actions/expire_test.exs +++ b/test/cachex/actions/expire_test.exs @@ -13,9 +13,9 @@ defmodule Cachex.Actions.ExpireTest do cache = TestUtils.create_cache(hooks: [hook]) # add some keys to the cache - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2, expire: 10) - {:ok, true} = 
Cachex.put(cache, 3, 3, expire: 10) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2, expire: 10) == :ok + assert Cachex.put(cache, 3, 3, expire: 10) == :ok # clear messages TestUtils.flush() @@ -25,43 +25,30 @@ defmodule Cachex.Actions.ExpireTest do p_expire_time = -10000 # expire several keys - result1 = Cachex.expire(cache, 1, f_expire_time) - result2 = Cachex.expire(cache, 2, f_expire_time) - result3 = Cachex.expire(cache, 3, p_expire_time) - result4 = Cachex.expire(cache, 4, f_expire_time) - - # the first two should succeed - assert(result1 == {:ok, true}) - assert(result2 == {:ok, true}) - - # the third should succeed and remove the key - assert(result3 == {:ok, true}) - - # the last one is missing and should fail - assert(result4 == {:ok, false}) + assert Cachex.expire(cache, 1, f_expire_time) + assert Cachex.expire(cache, 2, f_expire_time) + assert Cachex.expire(cache, 3, p_expire_time) + refute Cachex.expire(cache, 4, f_expire_time) # verify the hooks were updated with the message - assert_receive({{:expire, [1, ^f_expire_time, []]}, ^result1}) - assert_receive({{:expire, [2, ^f_expire_time, []]}, ^result2}) - assert_receive({{:expire, [3, ^p_expire_time, []]}, ^result3}) - assert_receive({{:expire, [4, ^f_expire_time, []]}, ^result4}) + assert_receive {{:expire, [1, ^f_expire_time, []]}, true} + assert_receive {{:expire, [2, ^f_expire_time, []]}, true} + assert_receive {{:expire, [3, ^p_expire_time, []]}, true} + assert_receive {{:expire, [4, ^f_expire_time, []]}, false} - # check we received valid purge actions for the removed key - assert_receive({{:purge, [[]]}, {:ok, 1}}) + # purge expired records + assert Cachex.purge(cache) - # retrieve all TTLs from the cache - ttl1 = Cachex.ttl!(cache, 1) - ttl2 = Cachex.ttl!(cache, 2) - ttl3 = Cachex.ttl(cache, 3) - ttl4 = Cachex.ttl(cache, 4) + # check we received valid purge actions for the removed key + assert_receive {{:purge, [[]]}, 1} # verify the new TTL has taken effect - assert_in_delta(ttl1, 10000, 25) - assert_in_delta(ttl2, 10000, 25) + assert_in_delta Cachex.ttl(cache, 1), 10000, 25 + assert_in_delta Cachex.ttl(cache, 2), 10000, 25 # assert the last two keys don't exist - assert(ttl3 == {:ok, nil}) - assert(ttl4 == {:ok, nil}) + assert Cachex.ttl(cache, 3) == nil + assert Cachex.ttl(cache, 4) == nil end # This test verifies that this action is correctly distributed across @@ -73,19 +60,15 @@ defmodule Cachex.Actions.ExpireTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # set expirations on both keys - {:ok, true} = Cachex.expire(cache, 1, 5000) - {:ok, true} = Cachex.expire(cache, 2, 5000) + assert Cachex.expire(cache, 1, 5000) + assert Cachex.expire(cache, 2, 5000) # check the expiration of each key in the cluster - {:ok, expiration1} = Cachex.ttl(cache, 1) - {:ok, expiration2} = Cachex.ttl(cache, 2) - - # both have an expiration - assert(expiration1 != nil) - assert(expiration2 != nil) + assert Cachex.ttl(cache, 1) != nil + assert Cachex.ttl(cache, 2) != nil end end diff --git a/test/cachex/actions/export_test.exs b/test/cachex/actions/export_test.exs index 093b8ccd..9fd5388d 100644 --- a/test/cachex/actions/export_test.exs +++ b/test/cachex/actions/export_test.exs @@ -7,12 +7,12 @@ defmodule Cachex.Actions.ExportTest do cache = TestUtils.create_cache() # fill with some items - 
{:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) - {:ok, true} = Cachex.put(cache, 3, 3) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok + assert Cachex.put(cache, 3, 3) == :ok # export the items - {:ok, export} = Cachex.export(cache) + export = Cachex.export(cache) # check the exported count assert length(export) == 3 @@ -29,37 +29,37 @@ defmodule Cachex.Actions.ExportTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # retrieve the keys from both local & remote - {:ok, export1} = Cachex.export(cache, local: true) - {:ok, export2} = Cachex.export(cache, local: false) + export1 = Cachex.export(cache, local: true) + export2 = Cachex.export(cache, local: false) # local just one, cluster has two - assert(length(export1) == 1) - assert(length(export2) == 2) + assert length(export1) == 1 + assert length(export2) == 2 # delete the single local key - {:ok, 1} = Cachex.clear(cache, local: true) + assert Cachex.clear(cache, local: true) == 1 # retrieve the keys again from both local & remote - {:ok, export3} = Cachex.export(cache, local: true) - {:ok, export4} = Cachex.export(cache, local: false) + export3 = Cachex.export(cache, local: true) + export4 = Cachex.export(cache, local: false) # now local has no keys - assert(length(export3) == 0) - assert(length(export4) == 1) + assert length(export3) == 0 + assert length(export4) == 1 # delete the remaining key inside the cluster - {:ok, 1} = Cachex.clear(cache, local: false) + assert Cachex.clear(cache, local: false) == 1 # retrieve the keys again from both local & remote - {:ok, export5} = Cachex.keys(cache, local: true) - {:ok, export6} = Cachex.keys(cache, local: false) + export5 = Cachex.keys(cache, local: true) + export6 = Cachex.keys(cache, local: false) # now both don't have any keys - assert(length(export5) == 0) - assert(length(export6) == 0) + assert length(export5) == 0 + assert length(export6) == 0 end end diff --git a/test/cachex/actions/fetch_test.exs b/test/cachex/actions/fetch_test.exs index f75a390d..6e573de2 100644 --- a/test/cachex/actions/fetch_test.exs +++ b/test/cachex/actions/fetch_test.exs @@ -13,8 +13,8 @@ defmodule Cachex.Actions.FetchTest do cache = TestUtils.create_cache(hooks: [hook]) # set some keys in the cache - {:ok, true} = Cachex.put(cache, "key1", 1) - {:ok, true} = Cachex.put(cache, "key2", 2, expire: 1) + assert Cachex.put(cache, "key1", 1) == :ok + assert Cachex.put(cache, "key2", 2, expire: 1) == :ok # wait for the TTL to pass :timer.sleep(2) @@ -29,49 +29,30 @@ defmodule Cachex.Actions.FetchTest do fb_opt4 = fn -> "6yek" end # fetch the first and second keys - result1 = Cachex.fetch(cache, "key1", fb_opt1) - result2 = Cachex.fetch(cache, "key2", fb_opt1) - - # verify fetching an existing key - assert(result1 == {:ok, 1}) - - # verify the ttl expiration - assert(result2 == {:commit, "2yek"}) + assert Cachex.fetch(cache, "key1", fb_opt1) == 1 + assert Cachex.fetch(cache, "key2", fb_opt1) == {:commit, "2yek"} # fetch keys with a provided fallback - result3 = Cachex.fetch(cache, "key3", fb_opt1) - result4 = Cachex.fetch(cache, "key4", fb_opt2) - result5 = Cachex.fetch(cache, "key5", fb_opt3) - result6 = Cachex.fetch(cache, "key6", fb_opt4) - - # verify the fallback fetches - assert(result3 == {:commit, "3yek"}) - assert(result4 == 
{:commit, "4yek"}) - assert(result5 == {:ignore, "5yek"}) - assert(result6 == {:commit, "6yek"}) + assert Cachex.fetch(cache, "key3", fb_opt1) == {:commit, "3yek"} + assert Cachex.fetch(cache, "key4", fb_opt2) == {:commit, "4yek"} + assert Cachex.fetch(cache, "key5", fb_opt3) == {:ignore, "5yek"} + assert Cachex.fetch(cache, "key6", fb_opt4) == {:commit, "6yek"} # assert we receive valid notifications - assert_receive({{:fetch, ["key1", ^fb_opt1, []]}, ^result1}) - assert_receive({{:fetch, ["key2", ^fb_opt1, []]}, ^result2}) - assert_receive({{:fetch, ["key3", ^fb_opt1, []]}, ^result3}) - assert_receive({{:fetch, ["key4", ^fb_opt2, []]}, ^result4}) - assert_receive({{:fetch, ["key5", ^fb_opt3, []]}, ^result5}) - assert_receive({{:fetch, ["key6", ^fb_opt4, []]}, ^result6}) + assert_receive {{:fetch, ["key1", ^fb_opt1, []]}, 1} + assert_receive {{:fetch, ["key2", ^fb_opt1, []]}, {:commit, "2yek"}} + assert_receive {{:fetch, ["key3", ^fb_opt1, []]}, {:commit, "3yek"}} + assert_receive {{:fetch, ["key4", ^fb_opt2, []]}, {:commit, "4yek"}} + assert_receive {{:fetch, ["key5", ^fb_opt3, []]}, {:ignore, "5yek"}} + assert_receive {{:fetch, ["key6", ^fb_opt4, []]}, {:commit, "6yek"}} # check we received valid purge actions for the TTL - assert_receive({{:purge, [[]]}, {:ok, 1}}) + assert_receive {{:purge, [[]]}, 1} # retrieve the loaded keys - value1 = Cachex.get(cache, "key3") - value2 = Cachex.get(cache, "key4") - value3 = Cachex.get(cache, "key5") - - # committed keys should now exist - assert(value1 == {:ok, "3yek"}) - assert(value2 == {:ok, "4yek"}) - - # ignored keys should not exist - assert(value3 == {:ok, nil}) + assert Cachex.get(cache, "key3") == "3yek" + assert Cachex.get(cache, "key4") == "4yek" + assert Cachex.get(cache, "key5") == nil end # This test ensures that the fallback is executed just once when a @@ -83,14 +64,14 @@ defmodule Cachex.Actions.FetchTest do # basic fallback fallback1 = fn -> - Cachex.incr!(cache, "key1_count") + Cachex.incr(cache, "key1_count") {:commit, "val"} end # secondary fallback fallback2 = fn -> # incr! exists to match the fallback1 exec time - Cachex.incr!(cache, "key2_count") + Cachex.incr(cache, "key2_count") Cachex.fetch(cache, "key1", fallback1) end @@ -110,7 +91,7 @@ defmodule Cachex.Actions.FetchTest do Task.await(task2) # check the fallback was only executed a single time - assert Cachex.get(cache, "key1_count") == {:ok, 1} + assert Cachex.get(cache, "key1_count") == 1 end end @@ -123,16 +104,10 @@ defmodule Cachex.Actions.FetchTest do fb_opt = &{:commit, String.reverse(&1), purged} # fetch our key using our fallback - result = Cachex.fetch(cache, "key", fb_opt) - - # verify fetching an existing key - assert(result == {:commit, "yek"}) - - # fetch back the expiration of the key - expiration = Cachex.ttl!(cache, "key") + assert Cachex.fetch(cache, "key", fb_opt) == {:commit, "yek"} # check we have a set expiration - assert_in_delta(expiration, 60000, 250) + assert_in_delta Cachex.ttl(cache, "key"), 60000, 250 end # This test verifies that this action is correctly distributed across @@ -145,16 +120,12 @@ defmodule Cachex.Actions.FetchTest do # we know that 1 & 2 hash to different nodes - have to make sure that we # use a known function, otherwise it fails with an undefined function. 
- {:commit, "1"} = Cachex.fetch(cache, 1, &Integer.to_string/1) - {:commit, "2"} = Cachex.fetch(cache, 2, &Integer.to_string/1) + assert Cachex.fetch(cache, 1, &Integer.to_string/1) == {:commit, "1"} + assert Cachex.fetch(cache, 2, &Integer.to_string/1) == {:commit, "2"} # try to retrieve both of the set keys - get1 = Cachex.get(cache, 1) - get2 = Cachex.get(cache, 2) - - # both should come back - assert(get1 == {:ok, "1"}) - assert(get2 == {:ok, "2"}) + assert Cachex.get(cache, 1) == "1" + assert Cachex.get(cache, 2) == "2" end # This test ensures that the fallback is executed just once per key, per TTL, @@ -208,7 +179,7 @@ defmodule Cachex.Actions.FetchTest do test "fetching functions have access to $callers" do # create a test cache cache = TestUtils.create_cache() - cache = Services.Overseer.get(cache) + cache = Services.Overseer.lookup(cache) # process chain parent = self() @@ -221,6 +192,6 @@ defmodule Cachex.Actions.FetchTest do end) # check callers are the Courier and us - assert_receive([^courier, ^parent]) + assert_receive [^courier, ^parent] end end diff --git a/test/cachex/actions/get_and_update_test.exs b/test/cachex/actions/get_and_update_test.exs index f8d76b03..fb2a8a3f 100644 --- a/test/cachex/actions/get_and_update_test.exs +++ b/test/cachex/actions/get_and_update_test.exs @@ -12,11 +12,11 @@ defmodule Cachex.Actions.GetAndUpdateTest do cache = TestUtils.create_cache(hooks: [hook]) # set some keys in the cache - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2, expire: 1) - {:ok, true} = Cachex.put(cache, 4, 4, expire: 1000) - {:ok, true} = Cachex.put(cache, 5, 5) - {:ok, true} = Cachex.put(cache, 6, 6) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2, expire: 1) == :ok + assert Cachex.put(cache, 4, 4, expire: 1000) == :ok + assert Cachex.put(cache, 5, 5) == :ok + assert Cachex.put(cache, 6, 6) == :ok # wait for the TTL to pass :timer.sleep(25) @@ -47,53 +47,40 @@ defmodule Cachex.Actions.GetAndUpdateTest do end) # verify the first key is retrieved - assert(result1 == {:commit, "1"}) + assert result1 == {:commit, "1"} # verify the second and third keys are missing - assert(result2 == {:commit, ""}) - assert(result3 == {:commit, ""}) + assert result2 == {:commit, ""} + assert result3 == {:commit, ""} # verify the fourth result - assert(result4 == {:commit, "4"}) + assert result4 == {:commit, "4"} # verify the fifth and sixth results - assert(result5 == {:ignore, "5"}) - assert(result6 == {:commit, "6"}) + assert result5 == {:ignore, "5"} + assert result6 == {:commit, "6"} # assert we receive valid notifications - assert_receive({{:get_and_update, [1, _to_string, []]}, ^result1}) - assert_receive({{:get_and_update, [2, _to_string, []]}, ^result2}) - assert_receive({{:get_and_update, [3, _to_string, []]}, ^result3}) - assert_receive({{:get_and_update, [4, _to_string, []]}, ^result4}) - assert_receive({{:get_and_update, [5, _my_functs, []]}, ^result5}) - assert_receive({{:get_and_update, [6, _my_functs, []]}, ^result6}) + assert_receive {{:get_and_update, [1, _to_string, nil, []]}, ^result1} + assert_receive {{:get_and_update, [2, _to_string, nil, []]}, ^result2} + assert_receive {{:get_and_update, [3, _to_string, nil, []]}, ^result3} + assert_receive {{:get_and_update, [4, _to_string, nil, []]}, ^result4} + assert_receive {{:get_and_update, [5, _my_functs, nil, []]}, ^result5} + assert_receive {{:get_and_update, [6, _my_functs, nil, []]}, ^result6} # check we received valid purge actions for the TTL - 
assert_receive({{:purge, [[]]}, {:ok, 1}}) + assert_receive {{:purge, [[]]}, 1} # retrieve all entries from the cache - value1 = Cachex.get(cache, 1) - value2 = Cachex.get(cache, 2) - value3 = Cachex.get(cache, 3) - value4 = Cachex.get(cache, 4) - value5 = Cachex.get(cache, 5) - value6 = Cachex.get(cache, 6) - - # all should now have values - assert(value1 == {:ok, "1"}) - assert(value2 == {:ok, ""}) - assert(value3 == {:ok, ""}) - assert(value4 == {:ok, "4"}) - - # verify the commit tags - assert(value5 == {:ok, 5}) - assert(value6 == {:ok, "6"}) - - # check the TTL on the last key - ttl1 = Cachex.ttl!(cache, 4) + assert Cachex.get(cache, 1) == "1" + assert Cachex.get(cache, 2) == "" + assert Cachex.get(cache, 3) == "" + assert Cachex.get(cache, 4) == "4" + assert Cachex.get(cache, 5) == 5 + assert Cachex.get(cache, 6) == "6" # TTL should be maintained - assert_in_delta(ttl1, 965, 11) + assert_in_delta Cachex.ttl(cache, 4), 965, 11 end # This test verifies that this action is correctly distributed across @@ -105,20 +92,16 @@ defmodule Cachex.Actions.GetAndUpdateTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # update both keys with a known function name - {:commit, "1"} = Cachex.get_and_update(cache, 1, &Integer.to_string/1) - {:commit, "2"} = Cachex.get_and_update(cache, 2, &Integer.to_string/1) + assert Cachex.get_and_update(cache, 1, &Integer.to_string/1) == {:commit, "1"} + assert Cachex.get_and_update(cache, 2, &Integer.to_string/1) == {:commit, "2"} # try to retrieve both of the set keys - get1 = Cachex.get(cache, 1) - get2 = Cachex.get(cache, 2) - - # both should come back - assert(get1 == {:ok, "1"}) - assert(get2 == {:ok, "2"}) + assert Cachex.get(cache, 1) == "1" + assert Cachex.get(cache, 2) == "2" end test "fallback function has test process in $callers" do @@ -133,9 +116,9 @@ defmodule Cachex.Actions.GetAndUpdateTest do {:commit, "value"} end) - assert(result == {:commit, "value"}) + assert result == {:commit, "value"} - assert_receive({^callers_reference, callers}) + assert_receive {^callers_reference, callers} assert test_process in callers end @@ -144,7 +127,7 @@ defmodule Cachex.Actions.GetAndUpdateTest do test "update functions have access to $callers" do # create a test cache cache = TestUtils.create_cache() - cache = Services.Overseer.get(cache) + cache = Services.Overseer.lookup(cache) # process chain parent = self() @@ -156,6 +139,6 @@ defmodule Cachex.Actions.GetAndUpdateTest do end) # check callers are just the base process - assert_receive([^parent]) + assert_receive [^parent] end end diff --git a/test/cachex/actions/get_test.exs b/test/cachex/actions/get_test.exs index f83252cf..86213c3f 100644 --- a/test/cachex/actions/get_test.exs +++ b/test/cachex/actions/get_test.exs @@ -13,8 +13,8 @@ defmodule Cachex.Actions.GetTest do cache1 = TestUtils.create_cache(hooks: [hook]) # set some keys in the cache - {:ok, true} = Cachex.put(cache1, 1, 1) - {:ok, true} = Cachex.put(cache1, 2, 2, expire: 1) + assert Cachex.put(cache1, 1, 1) == :ok + assert Cachex.put(cache1, 2, 2, expire: 1) == :ok # wait for the TTL to pass :timer.sleep(2) @@ -22,27 +22,20 @@ defmodule Cachex.Actions.GetTest do # flush all existing messages TestUtils.flush() - # take the first and second key - result1 = Cachex.get(cache1, 1) - result2 = Cachex.get(cache1, 2) - - # take a 
missing key with no fallback - result3 = Cachex.get(cache1, 3) - # verify the first key is retrieved - assert(result1 == {:ok, 1}) + assert Cachex.get(cache1, 1) == 1 # verify the second and third keys are missing - assert(result2 == {:ok, nil}) - assert(result3 == {:ok, nil}) + assert Cachex.get(cache1, 2) == nil + assert Cachex.get(cache1, 3) == nil # assert we receive valid notifications - assert_receive({{:get, [1, []]}, ^result1}) - assert_receive({{:get, [2, []]}, ^result2}) - assert_receive({{:get, [3, []]}, ^result3}) + assert_receive {{:get, [1, nil, []]}, 1} + assert_receive {{:get, [2, nil, []]}, nil} + assert_receive {{:get, [3, nil, []]}, nil} # check we received valid purge actions for the TTL - assert_receive({{:purge, [[]]}, {:ok, 1}}) + assert_receive {{:purge, [[]]}, 1} end # This test verifies that this action is correctly distributed across @@ -54,15 +47,11 @@ defmodule Cachex.Actions.GetTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # try to retrieve both of the set keys - get1 = Cachex.get(cache, 1) - get2 = Cachex.get(cache, 2) - - # both should come back - assert(get1 == {:ok, 1}) - assert(get2 == {:ok, 2}) + assert Cachex.get(cache, 1) == 1 + assert Cachex.get(cache, 2) == 2 end end diff --git a/test/cachex/actions/import_test.exs b/test/cachex/actions/import_test.exs index 7f20a28a..6cefdeb5 100644 --- a/test/cachex/actions/import_test.exs +++ b/test/cachex/actions/import_test.exs @@ -8,32 +8,25 @@ defmodule Cachex.Actions.ImportTest do start = now() # add some cache entries - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2, expire: 1) - {:ok, true} = Cachex.put(cache, 3, 3, expire: 10_000) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2, expire: 1) == :ok + assert Cachex.put(cache, 3, 3, expire: 10_000) == :ok # export the cache to a list result1 = Cachex.export(cache) - result2 = Cachex.clear(cache) - result3 = Cachex.size(cache) # verify the clearance - assert(result2 == {:ok, 3}) - assert(result3 == {:ok, 0}) + assert Cachex.clear(cache) == 3 + assert Cachex.size(cache) == 0 # wait a while before re-load :timer.sleep(50) # load the cache from the export - result4 = Cachex.import(cache, elem(result1, 1)) - result5 = Cachex.size(cache) - result6 = Cachex.ttl!(cache, 3) - - # verify that the import was ok - assert(result4 == {:ok, 2}) - assert(result5 == {:ok, 2}) + assert Cachex.import(cache, result1) == 2 + assert Cachex.size(cache) == 2 # verify TTL offsetting happens - assert_in_delta(result6, 10_000 - (now() - start), 5) + assert_in_delta Cachex.ttl(cache, 3), 10_000 - (now() - start), 5 end end diff --git a/test/cachex/actions/incr_test.exs b/test/cachex/actions/incr_test.exs index 5762a7fd..c172fda1 100644 --- a/test/cachex/actions/incr_test.exs +++ b/test/cachex/actions/incr_test.exs @@ -14,32 +14,19 @@ defmodule Cachex.Actions.IncrTest do # define write options opts1 = [default: 10] - # increment some items - incr1 = Cachex.incr(cache, "key1") - incr2 = Cachex.incr(cache, "key1", 2) - incr3 = Cachex.incr(cache, "key2", 1, opts1) - - # the first result should be 1 - assert(incr1 == {:ok, 1}) - - # the second result should be 3 - assert(incr2 == {:ok, 3}) - - # the third result should be 11 - assert(incr3 == {:ok, 11}) + # increment some items, verify the values + assert 
Cachex.incr(cache, "key1") == 1 + assert Cachex.incr(cache, "key1", 2) == 3 + assert Cachex.incr(cache, "key2", 1, opts1) == 11 # verify the hooks were updated with the increment - assert_receive({{:incr, ["key1", 1, []]}, ^incr1}) - assert_receive({{:incr, ["key1", 2, []]}, ^incr2}) - assert_receive({{:incr, ["key2", 1, ^opts1]}, ^incr3}) + assert_receive {{:incr, ["key1", 1, []]}, 1} + assert_receive {{:incr, ["key1", 2, []]}, 3} + assert_receive {{:incr, ["key2", 1, ^opts1]}, 11} - # retrieve all items - value1 = Cachex.get(cache, "key1") - value2 = Cachex.get(cache, "key2") - - # verify the items match - assert(value1 == {:ok, 3}) - assert(value2 == {:ok, 11}) + # retrieve all items, verify the items match + assert Cachex.get(cache, "key1") == 3 + assert Cachex.get(cache, "key2") == 11 end # This test covers the negative case where a value exists but is not an integer, @@ -50,13 +37,10 @@ defmodule Cachex.Actions.IncrTest do cache = TestUtils.create_cache() # set a non-numeric value - {:ok, true} = Cachex.put(cache, "key", "value") - - # try to increment the value - result = Cachex.incr(cache, "key") + assert Cachex.put(cache, "key", "value") == :ok - # we should receive an error - assert(result == {:error, :non_numeric_value}) + # try to increment the value, we should receive an error + assert Cachex.incr(cache, "key") == {:error, :non_numeric_value} end # This test verifies that this action is correctly distributed across @@ -68,15 +52,11 @@ defmodule Cachex.Actions.IncrTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, 1} = Cachex.incr(cache, 1, 1) - {:ok, 2} = Cachex.incr(cache, 2, 2) + assert Cachex.incr(cache, 1, 1) == 1 + assert Cachex.incr(cache, 2, 2) == 2 # check the results of the calls across nodes - size1 = Cachex.size(cache, local: true) - size2 = Cachex.size(cache, local: false) - - # one local, two total - assert(size1 == {:ok, 1}) - assert(size2 == {:ok, 2}) + assert Cachex.size(cache, local: true) == 1 + assert Cachex.size(cache, local: false) == 2 end end diff --git a/test/cachex/actions/inspect_test.exs b/test/cachex/actions/inspect_test.exs index bb0a8350..2feff605 100644 --- a/test/cachex/actions/inspect_test.exs +++ b/test/cachex/actions/inspect_test.exs @@ -10,32 +10,23 @@ defmodule Cachex.Actions.InspectTest do # set several values in the cache for x <- 1..3 do - {:ok, true} = Cachex.put(cache, "key#{x}", "value#{x}", expire: 1) + assert Cachex.put(cache, "key#{x}", "value#{x}", expire: 1) == :ok end # make sure they expire :timer.sleep(2) - # check both the expired count and the keyset - expired1 = Cachex.inspect(cache, {:expired, :count}) - expired2 = Cachex.inspect(cache, {:expired, :keys}) - # the first should contain the count of expired keys - assert(expired1 == {:ok, 3}) - - # break down the expired2 value - {:ok, keys} = expired2 + assert Cachex.inspect(cache, {:expired, :count}) == 3 - # so we just check if they're in the list - assert("key1" in keys) - assert("key2" in keys) - assert("key3" in keys) + # break down the expired key set + expired = + cache + |> Cachex.inspect({:expired, :keys}) + |> Enum.sort() - # grab the length of the expired keys - length1 = length(keys) - - # finally we make sure there are no bonus keys - assert(length1 == 3) + # verify all expired keys + assert expired == ["key1", "key2", "key3"] end # This test ensures that we can see the results of the last time a Janitor @@ -51,20 +42,16 @@ defmodule Cachex.Actions.InspectTest do # let the janitor run 
:timer.sleep(2) - # retrieve Janitor metadata for both states - result1 = Cachex.inspect(cache1, {:janitor, :last}) - result2 = Cachex.inspect(cache2, {:janitor, :last}) - - # the first cache should have an error - assert(result1 == {:error, :janitor_disabled}) + # the first cache should have an error because the janitor has been disabled + assert Cachex.inspect(cache1, {:janitor, :last}) == {:error, :janitor_disabled} - # break down the second result - {:ok, meta} = result2 + # fetch the second cache to verify the metadata + result = Cachex.inspect(cache2, {:janitor, :last}) # check the metadata matches the patterns - assert(is_integer(meta.count)) - assert(is_integer(meta.duration)) - assert(is_integer(meta.started)) + assert is_integer(result.count) + assert is_integer(result.duration) + assert is_integer(result.started) end # This test verifies that we can return stats about the memory being used by a @@ -75,15 +62,15 @@ # create a test cache cache = TestUtils.create_cache() # retrieve the memory usage - {:ok, result1} = Cachex.inspect(cache, {:memory, :bytes}) - {:ok, result2} = Cachex.inspect(cache, {:memory, :binary}) - {:ok, result3} = Cachex.inspect(cache, {:memory, :words}) + result1 = Cachex.inspect(cache, {:memory, :bytes}) + result2 = Cachex.inspect(cache, {:memory, :binary}) + result3 = Cachex.inspect(cache, {:memory, :words}) # the first result should be a number of bytes - assert(is_positive_integer(result1)) + assert is_positive_integer(result1) # the second result should be a human-readable representation - assert(result2 =~ ~r/\d+.\d{2} KiB/) + assert result2 =~ ~r/\d+.\d{2} KiB/ # fetch the system word size wsize = :erlang.system_info(:wordsize) @@ -92,7 +79,7 @@ words = div(result1, wsize) # the third should be a number of words - assert(result3 == words) + assert result3 == words end # This test verifies that we can retrieve a raw cache record without doing any @@ -106,24 +93,20 @@ ctime = now() # set a cache record - {:ok, true} = Cachex.put(cache, 1, "one", expire: 1000) - - # fetch some records - record1 = Cachex.inspect(cache, {:entry, 1}) - record2 = Cachex.inspect(cache, {:entry, 2}) + assert Cachex.put(cache, 1, "one", expire: 1000) # break down the first record - {:ok, entry(key: key, modified: mod, expiration: exp, value: value)} = - record1 + entry(key: key, modified: mod, expiration: exp, value: value) = + Cachex.inspect(cache, {:entry, 1}) # verify the first record - assert(key == 1) - assert_in_delta(mod, ctime, 2) - assert(exp == 1000) - assert(value == "one") + assert key == 1 + assert_in_delta mod, ctime, 2 + assert exp == 1000 + assert value == "one" # the second should be nil - assert(record2 == {:ok, nil}) + assert Cachex.inspect(cache, {:entry, 2}) == nil end # This test simply ensures that inspecting the cache state will return you the @@ -134,7 +117,7 @@ cache = TestUtils.create_cache() # retrieve the cache state - state1 = Services.Overseer.retrieve(cache) + state1 = Services.Overseer.lookup(cache) # update the state to have a different setting state2 = cache(state, transactions: true) end) - # retrieve the state via inspection - result = Cachex.inspect(state1, :cache) - # ensure the states don't match - assert(result != {:ok, state1}) + assert Cachex.inspect(state1, :cache) != state1 # the result should be using the latest state - assert(result == 
{:ok, state2}) + assert Cachex.inspect(state1, :cache) == state2 end # This test just verifies that we return an invalid option error when the value @@ -158,11 +138,8 @@ # create a test cache cache = TestUtils.create_cache() - # retrieve an invalid option - result = Cachex.inspect(cache, :invalid) - # check the result is an error - assert(result == {:error, :invalid_option}) + assert Cachex.inspect(cache, :invalid) == {:error, :invalid_option} end # This test verifies that the inspector always runs locally. We @@ -174,17 +151,14 @@ # create a new cache cluster for cleaning {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) + assert Cachex.put(cache, 2, 2) # lookup both entries on the local node - {:ok, entry1} = Cachex.inspect(cache, {:entry, 1}) - {:ok, entry2} = Cachex.inspect(cache, {:entry, 2}) + entry1 = Cachex.inspect(cache, {:entry, 1}) + entry2 = Cachex.inspect(cache, {:entry, 2}) # only one of them should be correctly found - assert( - (entry1 == nil && entry2 != nil) || - (entry2 == nil && entry1 != nil) - ) + assert (entry1 == nil && entry2 != nil) || (entry2 == nil && entry1 != nil) end end diff --git a/test/cachex/actions/invoke_test.exs b/test/cachex/actions/invoke_test.exs index e7438f2f..548dea2b 100644 --- a/test/cachex/actions/invoke_test.exs +++ b/test/cachex/actions/invoke_test.exs @@ -17,35 +17,25 @@ defmodule Cachex.Actions.InvokeTest do ) # set a list inside the cache - {:ok, true} = Cachex.put(cache, "list", [1, 2, 3, 4]) + assert Cachex.put(cache, "list", [1, 2, 3, 4]) == :ok # retrieve the raw record entry(key: "list", modified: modified) = - Cachex.inspect!(cache, {:entry, "list"}) - - # execute some custom commands - lpop1 = Cachex.invoke(cache, :lpop, "list") - lpop2 = Cachex.invoke(cache, :lpop, "list") - rpop1 = Cachex.invoke(cache, :rpop, "list") - rpop2 = Cachex.invoke(cache, :rpop, "list") + Cachex.inspect(cache, {:entry, "list"}) # verify that all results are as expected - assert(lpop1 == {:ok, 1}) - assert(lpop2 == {:ok, 2}) - assert(rpop1 == {:ok, 4}) - assert(rpop2 == {:ok, 3}) + assert Cachex.invoke(cache, :lpop, "list") == 1 + assert Cachex.invoke(cache, :lpop, "list") == 2 + assert Cachex.invoke(cache, :rpop, "list") == 4 + assert Cachex.invoke(cache, :rpop, "list") == 3 # verify the modified time was unchanged - assert Cachex.inspect!(cache, {:entry, "list"}) == + assert Cachex.inspect(cache, {:entry, "list"}) == entry(key: "list", modified: modified, value: []) # pop some extras to test avoiding writes - lpop3 = Cachex.invoke(cache, :lpop, "list") - rpop3 = Cachex.invoke(cache, :rpop, "list") - - # verify we stayed the same - assert(lpop3 == {:ok, nil}) - assert(rpop3 == {:ok, nil}) + assert Cachex.invoke(cache, :lpop, "list") == nil + assert Cachex.invoke(cache, :rpop, "list") == nil end # This test covers the ability to run commands tagged with the `:return` type. 
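The invoke assertions above capture the headline change in this test suite: `Cachex.invoke/4`, like the rest of the API, now resolves to the bare command result rather than an `{:ok, value}` tuple. A minimal sketch of the same pattern outside the test suite; the `:invoke_demo` cache name and the `:sum` command wiring are illustrative assumptions, not part of this changeset:

```elixir
import Cachex.Spec

# start a cache with one custom :read command (hypothetical wiring)
Cachex.start_link(:invoke_demo,
  commands: [sum: command(type: :read, execute: &Enum.sum/1)]
)

# seed a list value for the command to operate on
Cachex.put(:invoke_demo, "numbers", [1, 2, 3])

# previously this returned {:ok, 6}; with this changeset the
# bare command result comes back instead
6 = Cachex.invoke(:invoke_demo, :sum, "numbers")
```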
@@ -63,13 +53,10 @@ defmodule Cachex.Actions.InvokeTest do # define a validation function validate = fn list, expected -> # set a list inside the cache - {:ok, true} = Cachex.put(cache, "list", list) - - # retrieve the last value - last = Cachex.invoke(cache, :last, "list") + assert Cachex.put(cache, "list", list) == :ok - # compare with the expected - assert(last == {:ok, expected}) + # retrieve the last value, compare with the expected + assert Cachex.invoke(cache, :last, "list") == expected end # ensure basic list works @@ -88,7 +75,7 @@ defmodule Cachex.Actions.InvokeTest do cache = TestUtils.create_cache() # retrieve the state - state = Services.Overseer.retrieve(cache) + state = Services.Overseer.lookup(cache) # modify the state to have fake commands state = @@ -100,16 +87,11 @@ defmodule Cachex.Actions.InvokeTest do ) # try to invoke a missing command - invoke1 = Cachex.invoke(state, :unknowns, "heh") + assert Cachex.invoke(state, :unknowns, "heh") == {:error, :invalid_command} # try to invoke bad arity commands - invoke2 = Cachex.invoke(state, :fake_mod, "heh") - invoke3 = Cachex.invoke(state, :fake_ret, "heh") - - # all should error - assert(invoke1 == {:error, :invalid_command}) - assert(invoke2 == {:error, :invalid_command}) - assert(invoke3 == {:error, :invalid_command}) + assert Cachex.invoke(state, :fake_mod, "heh") == {:error, :invalid_command} + assert Cachex.invoke(state, :fake_ret, "heh") == {:error, :invalid_command} end # This test verifies that this action is correctly distributed across @@ -126,16 +108,12 @@ defmodule Cachex.Actions.InvokeTest do ) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, [1, 2, 3]) - {:ok, true} = Cachex.put(cache, 2, [4, 5, 6]) + assert Cachex.put(cache, 1, [1, 2, 3]) == :ok + assert Cachex.put(cache, 2, [4, 5, 6]) == :ok # check the results from both keys in the nodes - last1 = Cachex.invoke(cache, :last, 1) - last2 = Cachex.invoke(cache, :last, 2) - - # check the command results - assert(last1 == {:ok, 3}) - assert(last2 == {:ok, 6}) + assert Cachex.invoke(cache, :last, 1) == 3 + assert Cachex.invoke(cache, :last, 2) == 6 end # A simple left pop for a List to remove the head and return the tail as the diff --git a/test/cachex/actions/keys_test.exs b/test/cachex/actions/keys_test.exs index 7d9c3b20..66f8225e 100644 --- a/test/cachex/actions/keys_test.exs +++ b/test/cachex/actions/keys_test.exs @@ -13,14 +13,14 @@ defmodule Cachex.Actions.KeysTest do cache = TestUtils.create_cache(hooks: [hook]) # fill with some items - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) - {:ok, true} = Cachex.put(cache, 3, 3) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok + assert Cachex.put(cache, 3, 3) == :ok # add some expired items - {:ok, true} = Cachex.put(cache, 4, 4, expire: 1) - {:ok, true} = Cachex.put(cache, 5, 5, expire: 1) - {:ok, true} = Cachex.put(cache, 6, 6, expire: 1) + assert Cachex.put(cache, 4, 4, expire: 1) + assert Cachex.put(cache, 5, 5, expire: 1) + assert Cachex.put(cache, 6, 6, expire: 1) # let entries expire :timer.sleep(2) @@ -29,19 +29,13 @@ defmodule Cachex.Actions.KeysTest do TestUtils.flush() # retrieve the keys - {status, keys} = Cachex.keys(cache) - - # ensure the status is ok - assert(status == :ok) - - # sort the keys - result = Enum.sort(keys) + keys = Cachex.keys(cache) # only 3 items should come back - assert(result == [1, 2, 3]) + assert Enum.sort(keys) == [1, 2, 3] # verify the hooks were updated with the count - 
assert_receive({{:keys, [[]]}, {^status, ^keys}}) + assert_receive {{:keys, [[]]}, ^keys} end # This test verifies that the distributed router correctly controls @@ -55,37 +49,37 @@ defmodule Cachex.Actions.KeysTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # retrieve the keys from both local & remote - {:ok, keys1} = Cachex.keys(cache, local: true) - {:ok, keys2} = Cachex.keys(cache, local: false) + keys1 = Cachex.keys(cache, local: true) + keys2 = Cachex.keys(cache, local: false) # local just one, cluster has two - assert(length(keys1) == 1) - assert(length(keys2) == 2) + assert length(keys1) == 1 + assert length(keys2) == 2 # delete the single local key - {:ok, 1} = Cachex.clear(cache, local: true) + assert Cachex.clear(cache, local: true) == 1 # retrieve the keys again from both local & remote - {:ok, keys3} = Cachex.keys(cache, local: true) - {:ok, keys4} = Cachex.keys(cache, local: false) + keys3 = Cachex.keys(cache, local: true) + keys4 = Cachex.keys(cache, local: false) # now local has no keys - assert(length(keys3) == 0) - assert(length(keys4) == 1) + assert length(keys3) == 0 + assert length(keys4) == 1 # delete the remaining key inside the cluster - {:ok, 1} = Cachex.clear(cache, local: false) + assert Cachex.clear(cache, local: false) == 1 # retrieve the keys again from both local & remote - {:ok, keys5} = Cachex.keys(cache, local: true) - {:ok, keys6} = Cachex.keys(cache, local: false) + keys5 = Cachex.keys(cache, local: true) + keys6 = Cachex.keys(cache, local: false) # now both don't have any keys - assert(length(keys5) == 0) - assert(length(keys6) == 0) + assert length(keys5) == 0 + assert length(keys6) == 0 end end diff --git a/test/cachex/actions/persist_test.exs b/test/cachex/actions/persist_test.exs index 553d1d64..18eb2343 100644 --- a/test/cachex/actions/persist_test.exs +++ b/test/cachex/actions/persist_test.exs @@ -12,46 +12,31 @@ defmodule Cachex.Actions.PersistTest do cache = TestUtils.create_cache(hooks: [hook]) # add some keys to the cache - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2, expire: 1000) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2, expire: 1000) == :ok # clear messages TestUtils.flush() - # retrieve all TTLs from the cache - ttl1 = Cachex.ttl!(cache, 1) - ttl2 = Cachex.ttl!(cache, 2) - # the first TTL should be nil - assert(ttl1 == nil) + assert Cachex.ttl(cache, 1) == nil # the second TTL should be roughly 1000 - assert_in_delta(ttl2, 995, 6) + assert_in_delta Cachex.ttl(cache, 2), 995, 6 # remove the TTLs - persist1 = Cachex.persist(cache, 1) - persist2 = Cachex.persist(cache, 2) - persist3 = Cachex.persist(cache, 3) - - # the first two writes should succeed - assert(persist1 == {:ok, true}) - assert(persist2 == {:ok, true}) - - # the third shouldn't, as it's missing - assert(persist3 == {:ok, false}) + assert Cachex.persist(cache, 1) + assert Cachex.persist(cache, 2) + refute Cachex.persist(cache, 3) # verify the hooks were updated with the message - assert_receive({{:persist, [1, []]}, ^persist1}) - assert_receive({{:persist, [2, []]}, ^persist2}) - assert_receive({{:persist, [3, []]}, ^persist3}) - - # retrieve all TTLs from the cache - ttl3 = Cachex.ttl!(cache, 1) - ttl4 = Cachex.ttl!(cache, 2) + assert_receive {{:persist, [1, []]}, true} + assert_receive 
{{:persist, [2, []]}, true} + assert_receive {{:persist, [3, []]}, false} # both TTLs should now be nil - assert(ttl3 == nil) - assert(ttl4 == nil) + assert Cachex.ttl(cache, 1) == nil + assert Cachex.ttl(cache, 2) == nil end # This test verifies that this action is correctly distributed across @@ -63,19 +48,15 @@ defmodule Cachex.Actions.PersistTest do # create a new cache cluster for cleaning {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1, expire: 5000) - {:ok, true} = Cachex.put(cache, 2, 2, expire: 5000) + assert Cachex.put(cache, 1, 1, expire: 5000) == :ok + assert Cachex.put(cache, 2, 2, expire: 5000) == :ok # remove expirations on both keys - {:ok, true} = Cachex.persist(cache, 1) - {:ok, true} = Cachex.persist(cache, 2) + assert Cachex.persist(cache, 1) + assert Cachex.persist(cache, 2) # check the expiration of each key in the cluster - {:ok, expiration1} = Cachex.ttl(cache, 1) - {:ok, expiration2} = Cachex.ttl(cache, 2) - - # both have an expiration - assert(expiration1 == nil) - assert(expiration2 == nil) + assert Cachex.ttl(cache, 1) == nil + assert Cachex.ttl(cache, 2) == nil end end diff --git a/test/cachex/actions/prune_test.exs b/test/cachex/actions/prune_test.exs index 0d51c9b6..152d23e4 100644 --- a/test/cachex/actions/prune_test.exs +++ b/test/cachex/actions/prune_test.exs @@ -7,17 +7,17 @@ defmodule Cachex.Actions.PruneTest do # insert 100 keys for i <- 1..100 do - Cachex.put!(cache, i, i) + assert Cachex.put(cache, i, i) == :ok end # guarantee we have 100 keys in the cache - assert Cachex.size(cache) == {:ok, 100} + assert Cachex.size(cache) == 100 # trigger a pruning down to 50 keys - assert Cachex.prune(cache, 50) == {:ok, true} + assert Cachex.prune(cache, 50) == 55 # verify that we're down to 45 keys, after the default 10% reclaim - assert Cachex.size(cache) == {:ok, 45} + assert Cachex.size(cache) == 45 end test "pruning a cache to a size with a custom reclaim" do @@ -26,17 +26,17 @@ # insert 100 keys for i <- 1..100 do - Cachex.put!(cache, i, i) + assert Cachex.put(cache, i, i) == :ok end # guarantee we have 100 keys in the cache - assert Cachex.size(cache) == {:ok, 100} + assert Cachex.size(cache) == 100 # trigger a pruning down to 50 keys, with no extra reclaim - assert Cachex.prune(cache, 50, reclaim: 0) == {:ok, true} + assert Cachex.prune(cache, 50, reclaim: 0) == 50 # verify that we're down to 50 keys - assert Cachex.size(cache) == {:ok, 50} + assert Cachex.size(cache) == 50 end # This test ensures that the cache eviction policy will evict any expired values @@ -50,12 +50,12 @@ cache = TestUtils.create_cache() # retrieve the cache state - state = Services.Overseer.retrieve(cache) + state = Services.Overseer.lookup(cache) # set 50 keys without ttl for x <- 1..50 do # set the key - {:ok, true} = Cachex.put(state, x, x) + assert Cachex.put(state, x, x) == :ok # tick to make sure each has a new touch time :timer.sleep(1) @@ -64,36 +64,30 @@ # set a more recent 50 keys for x <- 51..100 do # set the key - {:ok, true} = Cachex.put(state, x, x, expire: 1) + assert Cachex.put(state, x, x, expire: 1) == :ok # tick to make sure each has a new touch time :timer.sleep(1) end # retrieve the cache size - size1 = Cachex.size!(cache) - - # verify the cache size - assert(size1 == 100) + assert Cachex.size(cache) == 100 # add a new key to the cache to trigger oversize - {:ok, true} = Cachex.put(state, 101, 101) + assert Cachex.put(state, 101, 
101) == :ok # trigger the cache pruning down to 100 records - {:ok, true} = Cachex.prune(cache, 100, reclaim: 0.3, buffer: -1) + assert Cachex.prune(cache, 100, reclaim: 0.3, buffer: -1) == 0 # verify the cache shrinks to 51 keys - assert Cachex.size(state) == {:ok, 51} + assert Cachex.size(state) == 51 # our validation step validate = fn range, expected -> # iterate all keys in the range for x <- range do - # retrieve whether the key exists - exists = Cachex."exists?!"(state, x) - - # verify whether it exists - assert(exists == expected) + # retrieve whether the key exists and verify + assert Cachex.exists?(state, x) == expected end end diff --git a/test/cachex/actions/purge_test.exs b/test/cachex/actions/purge_test.exs index ef221327..dbf4baf9 100644 --- a/test/cachex/actions/purge_test.exs +++ b/test/cachex/actions/purge_test.exs @@ -13,37 +13,28 @@ defmodule Cachex.Actions.PurgeTest do cache = TestUtils.create_cache(hooks: [hook]) # add a new cache entry - {:ok, true} = Cachex.put(cache, "key", "value", expire: 25) + assert Cachex.put(cache, "key", "value", expire: 25) == :ok # flush messages TestUtils.flush() # purge before the entry expires - purge1 = Cachex.purge(cache) - - # verify that the purge removed nothing - assert(purge1 == {:ok, 0}) + assert Cachex.purge(cache) == 0 # ensure we received a message - assert_receive({{:purge, [[]]}, {:ok, 0}}) + assert_receive {{:purge, [[]]}, 0} # wait until the entry has expired :timer.sleep(50) # purge after the entry expires - purge2 = Cachex.purge(cache) - - # verify that the purge removed the key - assert(purge2 == {:ok, 1}) + assert Cachex.purge(cache) == 1 # ensure we received a message - assert_receive({{:purge, [[]]}, {:ok, 1}}) + assert_receive {{:purge, [[]]}, 1} - # check whether the key exists - exists = Cachex.exists?(cache, "key") - - # verify that the key is gone - assert(exists == {:ok, false}) + # check whether the key exists, verify that the key is gone + refute Cachex.exists?(cache, "key") end # This test verifies that the distributed router correctly controls @@ -57,21 +48,17 @@ defmodule Cachex.Actions.PurgeTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1, expire: 1) - {:ok, true} = Cachex.put(cache, 2, 2, expire: 1) + assert Cachex.put(cache, 1, 1, expire: 1) == :ok + assert Cachex.put(cache, 2, 2, expire: 1) == :ok # retrieve the cache size, should be 2 - {:ok, 2} = Cachex.size(cache) + assert Cachex.size(cache) == 2 # give it a few ms to expire... 
:timer.sleep(5) # purge just the local cache to start with - purge1 = Cachex.purge(cache, local: true) - purge2 = Cachex.purge(cache, local: false) - - # check the local removed 1 - assert(purge1 == {:ok, 1}) - assert(purge2 == {:ok, 1}) + assert Cachex.purge(cache, local: true) == 1 + assert Cachex.purge(cache, local: false) == 1 end end diff --git a/test/cachex/actions/put_many_test.exs b/test/cachex/actions/put_many_test.exs index 69cd9fb6..3ce5d36e 100644 --- a/test/cachex/actions/put_many_test.exs +++ b/test/cachex/actions/put_many_test.exs @@ -20,68 +20,42 @@ defmodule Cachex.Actions.PutManyTest do ) # set some values in the cache - set1 = Cachex.put_many(cache1, [{1, 1}, {2, 2}]) - set2 = Cachex.put_many(cache1, [{3, 3}, {4, 4}], expire: 5000) - set3 = Cachex.put_many(cache2, [{1, 1}, {2, 2}]) - set4 = Cachex.put_many(cache2, [{3, 3}, {4, 4}], expire: 5000) - - # ensure all set actions worked - assert(set1 == {:ok, true}) - assert(set2 == {:ok, true}) - assert(set3 == {:ok, true}) - assert(set4 == {:ok, true}) + assert Cachex.put_many(cache1, [{1, 1}, {2, 2}]) == :ok + assert Cachex.put_many(cache1, [{3, 3}, {4, 4}], expire: 5000) == :ok + assert Cachex.put_many(cache2, [{1, 1}, {2, 2}]) == :ok + assert Cachex.put_many(cache2, [{3, 3}, {4, 4}], expire: 5000) == :ok # verify the hooks were updated with the message - assert_receive({{:put_many, [[{1, 1}, {2, 2}], []]}, ^set1}) - assert_receive({{:put_many, [[{1, 1}, {2, 2}], []]}, ^set3}) - assert_receive({{:put_many, [[{3, 3}, {4, 4}], [expire: 5000]]}, ^set2}) - assert_receive({{:put_many, [[{3, 3}, {4, 4}], [expire: 5000]]}, ^set4}) + assert_receive {{:put_many, [[{1, 1}, {2, 2}], []]}, :ok} + assert_receive {{:put_many, [[{1, 1}, {2, 2}], []]}, :ok} + assert_receive {{:put_many, [[{3, 3}, {4, 4}], [expire: 5000]]}, :ok} + assert_receive {{:put_many, [[{3, 3}, {4, 4}], [expire: 5000]]}, :ok} # read back all values from the cache - value1 = Cachex.get(cache1, 1) - value2 = Cachex.get(cache1, 2) - value3 = Cachex.get(cache1, 3) - value4 = Cachex.get(cache1, 4) - value5 = Cachex.get(cache2, 1) - value6 = Cachex.get(cache2, 2) - value7 = Cachex.get(cache2, 3) - value8 = Cachex.get(cache2, 4) - - # verify all values exist - assert(value1 == {:ok, 1}) - assert(value2 == {:ok, 2}) - assert(value3 == {:ok, 3}) - assert(value4 == {:ok, 4}) - assert(value5 == {:ok, 1}) - assert(value6 == {:ok, 2}) - assert(value7 == {:ok, 3}) - assert(value8 == {:ok, 4}) - - # read back all key TTLs - ttl1 = Cachex.ttl!(cache1, 1) - ttl2 = Cachex.ttl!(cache1, 2) - ttl3 = Cachex.ttl!(cache1, 3) - ttl4 = Cachex.ttl!(cache1, 4) - ttl5 = Cachex.ttl!(cache2, 1) - ttl6 = Cachex.ttl!(cache2, 2) - ttl7 = Cachex.ttl!(cache2, 3) - ttl8 = Cachex.ttl!(cache2, 4) + assert Cachex.get(cache1, 1) == 1 + assert Cachex.get(cache1, 2) == 2 + assert Cachex.get(cache1, 3) == 3 + assert Cachex.get(cache1, 4) == 4 + assert Cachex.get(cache2, 1) == 1 + assert Cachex.get(cache2, 2) == 2 + assert Cachex.get(cache2, 3) == 3 + assert Cachex.get(cache2, 4) == 4 # the first two should have no TTL - assert(ttl1 == nil) - assert(ttl2 == nil) + assert Cachex.ttl(cache1, 1) == nil + assert Cachex.ttl(cache1, 2) == nil # the second two should have a TTL around 5s - assert_in_delta(ttl3, 5000, 10) - assert_in_delta(ttl4, 5000, 10) + assert_in_delta Cachex.ttl(cache1, 3), 5000, 10 + assert_in_delta Cachex.ttl(cache1, 4), 5000, 10 # the third two should have a TTL around 10s - assert_in_delta(ttl5, 10000, 10) - assert_in_delta(ttl6, 10000, 10) + assert_in_delta Cachex.ttl(cache2, 1), 10000, 10 + 
assert_in_delta Cachex.ttl(cache2, 2), 10000, 10 # the last two should have a TTL around 5s - assert_in_delta(ttl7, 5000, 10) - assert_in_delta(ttl8, 5000, 10) + assert_in_delta Cachex.ttl(cache2, 3), 5000, 10 + assert_in_delta Cachex.ttl(cache2, 4), 5000, 10 end # This should no-op to avoid a crashing write, whilst @@ -91,10 +65,7 @@ defmodule Cachex.Actions.PutManyTest do cache = TestUtils.create_cache() # try set some values in the cache - result = Cachex.put_many(cache, []) - - # should work, but no writes - assert(result == {:ok, false}) + assert Cachex.put_many(cache, []) == :ok end # Since we have a hard requirement on the format of a batch, we @@ -108,17 +79,13 @@ defmodule Cachex.Actions.PutManyTest do cache = TestUtils.create_cache(hooks: [hook]) # try set some values in the cache - set1 = Cachex.put_many(cache, [{1, 1}, "key"]) - set2 = Cachex.put_many(cache, [{1, 1}, {2, 2, 2}]) - - # ensure all set actions failed - assert(set1 == error(:invalid_pairs)) - assert(set2 == error(:invalid_pairs)) + assert Cachex.put_many(cache, [{1, 1}, "key"]) == error(:invalid_pairs) + assert Cachex.put_many(cache, [{1, 1}, {2, 2, 2}]) == error(:invalid_pairs) # try without a list of pairs - assert_raise(FunctionClauseError, fn -> + assert_raise FunctionClauseError, fn -> Cachex.put_many(cache, {1, 1}) - end) + end end # This test verifies that this action is correctly distributed across @@ -130,15 +97,11 @@ defmodule Cachex.Actions.PutManyTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 2 & 3 hash to the same slots - {:ok, true} = Cachex.put_many(cache, [{2, 2}, {3, 3}]) + assert Cachex.put_many(cache, [{2, 2}, {3, 3}]) == :ok # try to retrieve both of the set keys - get1 = Cachex.get(cache, 2) - get2 = Cachex.get(cache, 3) - - # both should come back - assert(get1 == {:ok, 2}) - assert(get2 == {:ok, 3}) + assert Cachex.get(cache, 2) == 2 + assert Cachex.get(cache, 3) == 3 end # This test verifies that all keys in a put_many/3 must hash to the @@ -148,10 +111,7 @@ defmodule Cachex.Actions.PutManyTest do # create a new cache cluster for cleaning {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) - # we know that 1 & 3 don't hash to the same slots - put_many = Cachex.put_many(cache, [{1, 1}, {3, 3}]) - - # so there should be an error - assert(put_many == {:error, :cross_slot}) + # we know that 1 & 3 don't hash to the same slots, so there should be an error + assert Cachex.put_many(cache, [{1, 1}, {3, 3}]) == {:error, :cross_slot} end end diff --git a/test/cachex/actions/put_test.exs b/test/cachex/actions/put_test.exs index e9fd6e74..ff83d1a5 100644 --- a/test/cachex/actions/put_test.exs +++ b/test/cachex/actions/put_test.exs @@ -20,52 +20,34 @@ defmodule Cachex.Actions.PutTest do ) # set some values in the cache - set1 = Cachex.put(cache1, 1, 1) - set2 = Cachex.put(cache1, 2, 2, expire: 5000) - set3 = Cachex.put(cache2, 1, 1) - set4 = Cachex.put(cache2, 2, 2, expire: 5000) - - # ensure all set actions worked - assert(set1 == {:ok, true}) - assert(set2 == {:ok, true}) - assert(set3 == {:ok, true}) - assert(set4 == {:ok, true}) + assert Cachex.put(cache1, 1, 1) == :ok + assert Cachex.put(cache1, 2, 2, expire: 5000) == :ok + assert Cachex.put(cache2, 1, 1) == :ok + assert Cachex.put(cache2, 2, 2, expire: 5000) == :ok # verify the hooks were updated with the message - assert_receive({{:put, [1, 1, []]}, ^set1}) - assert_receive({{:put, [1, 1, []]}, ^set3}) - assert_receive({{:put, [2, 2, [expire: 5000]]}, ^set2}) - assert_receive({{:put, [2, 2, [expire: 
5000]]}, ^set4}) + assert_receive {{:put, [1, 1, []]}, :ok} + assert_receive {{:put, [1, 1, []]}, :ok} + assert_receive {{:put, [2, 2, [expire: 5000]]}, :ok} + assert_receive {{:put, [2, 2, [expire: 5000]]}, :ok} # read back all values from the cache - value1 = Cachex.get(cache1, 1) - value2 = Cachex.get(cache1, 2) - value3 = Cachex.get(cache2, 1) - value4 = Cachex.get(cache2, 2) - - # verify all values exist - assert(value1 == {:ok, 1}) - assert(value2 == {:ok, 2}) - assert(value3 == {:ok, 1}) - assert(value4 == {:ok, 2}) + assert Cachex.get(cache1, 1) == 1 + assert Cachex.get(cache1, 2) == 2 + assert Cachex.get(cache2, 1) == 1 + assert Cachex.get(cache2, 2) == 2 # read back all key TTLs - ttl1 = Cachex.ttl!(cache1, 1) - ttl2 = Cachex.ttl!(cache1, 2) - ttl3 = Cachex.ttl!(cache2, 1) - ttl4 = Cachex.ttl!(cache2, 2) - - # the first should have no TTL - assert(ttl1 == nil) + assert Cachex.ttl(cache1, 1) == nil # the second should have a TTL around 5s - assert_in_delta(ttl2, 5000, 10) + assert_in_delta Cachex.ttl(cache1, 2), 5000, 10 # the third should have a TTL around 10s - assert_in_delta(ttl3, 10000, 10) + assert_in_delta Cachex.ttl(cache2, 1), 10000, 10 # the fourth should have a TTL around 5s - assert_in_delta(ttl4, 5000, 10) + assert_in_delta Cachex.ttl(cache2, 2), 5000, 10 end # This test verifies that this action is correctly distributed across @@ -77,15 +59,11 @@ defmodule Cachex.Actions.PutTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # check the results of the calls across nodes - size1 = Cachex.size(cache, local: true) - size2 = Cachex.size(cache, local: false) - - # one local, two total - assert(size1 == {:ok, 1}) - assert(size2 == {:ok, 2}) + assert Cachex.size(cache, local: true) == 1 + assert Cachex.size(cache, local: false) == 2 end end diff --git a/test/cachex/actions/refresh_test.exs b/test/cachex/actions/refresh_test.exs index 6527ec09..05a7f67f 100644 --- a/test/cachex/actions/refresh_test.exs +++ b/test/cachex/actions/refresh_test.exs @@ -13,8 +13,8 @@ defmodule Cachex.Actions.RefreshTest do cache = TestUtils.create_cache(hooks: [hook]) # add some keys to the cache - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2, expire: 1000) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2, expire: 1000) == :ok # clear messages TestUtils.flush() @@ -22,42 +22,27 @@ defmodule Cachex.Actions.RefreshTest do # wait for 25ms :timer.sleep(25) - # retrieve all TTLs from the cache - ttl1 = Cachex.ttl!(cache, 1) - ttl2 = Cachex.ttl!(cache, 2) - # the first TTL should be nil - assert(ttl1 == nil) + assert Cachex.ttl(cache, 1) == nil # the second TTL should be roughly 975 - assert_in_delta(ttl2, 970, 6) + assert_in_delta Cachex.ttl(cache, 2), 970, 6 # refresh some TTLs - refresh1 = Cachex.refresh(cache, 1) - refresh2 = Cachex.refresh(cache, 2) - refresh3 = Cachex.refresh(cache, 3) - - # the first two writes should succeed - assert(refresh1 == {:ok, true}) - assert(refresh2 == {:ok, true}) - - # the third shouldn't, as it's missing - assert(refresh3 == {:ok, false}) + assert Cachex.refresh(cache, 1) + assert Cachex.refresh(cache, 2) + refute Cachex.refresh(cache, 3) # verify the hooks were updated with the message - assert_receive({{:refresh, [1, []]}, ^refresh1}) - assert_receive({{:refresh, [2, []]}, ^refresh2}) - 
assert_receive({{:refresh, [3, []]}, ^refresh3}) - - # retrieve all TTLs from the cache - ttl3 = Cachex.ttl!(cache, 1) - ttl4 = Cachex.ttl!(cache, 2) + assert_receive {{:refresh, [1, []]}, true} + assert_receive {{:refresh, [2, []]}, true} + assert_receive {{:refresh, [3, []]}, false} # the first TTL should still be nil - assert(ttl3 == nil) + assert Cachex.ttl(cache, 1) == nil # the second TTL should be reset to 1000 - assert_in_delta(ttl4, 995, 10) + assert_in_delta Cachex.ttl(cache, 2), 995, 10 end # This test verifies that this action is correctly distributed across @@ -69,34 +54,22 @@ defmodule Cachex.Actions.RefreshTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1, expire: 500) - {:ok, true} = Cachex.put(cache, 2, 2, expire: 500) + assert Cachex.put(cache, 1, 1, expire: 500) == :ok + assert Cachex.put(cache, 2, 2, expire: 500) == :ok # pause to reduce the TTL a little :timer.sleep(250) # check the expiration of each key in the cluster - {:ok, expiration1} = Cachex.ttl(cache, 1) - {:ok, expiration2} = Cachex.ttl(cache, 2) - - # check the delta changed - assert(expiration1 < 300) - assert(expiration2 < 300) + assert Cachex.ttl(cache, 1) < 300 + assert Cachex.ttl(cache, 2) < 300 # refresh the TTL on both keys - refresh1 = Cachex.refresh(cache, 1) - refresh2 = Cachex.refresh(cache, 2) - - # check the refresh results - assert(refresh1 == {:ok, true}) - assert(refresh2 == {:ok, true}) - - # check the expiration of each key in the cluster - {:ok, expiration3} = Cachex.ttl(cache, 1) - {:ok, expiration4} = Cachex.ttl(cache, 2) + assert Cachex.refresh(cache, 1) + assert Cachex.refresh(cache, 2) # check the time reset - assert(expiration3 > 300) - assert(expiration4 > 300) + assert Cachex.ttl(cache, 1) > 300 + assert Cachex.ttl(cache, 2) > 300 end end diff --git a/test/cachex/actions/reset_test.exs b/test/cachex/actions/reset_test.exs index 76835960..bef00996 100644 --- a/test/cachex/actions/reset_test.exs +++ b/test/cachex/actions/reset_test.exs @@ -25,18 +25,18 @@ defmodule Cachex.Actions.ResetTest do ctime1 = now() # set some values - {:ok, true} = Cachex.put(cache1, 1, 1) - {:ok, true} = Cachex.put(cache2, 1, 1) + assert Cachex.put(cache1, 1, 1) == :ok + assert Cachex.put(cache2, 1, 1) == :ok # retrieve the stats - stats1 = Cachex.stats!(cache1) + stats1 = Cachex.stats(cache1) # verify the stats - assert_in_delta(stats1.meta.creation_date, ctime1, 10) + assert_in_delta stats1.meta.creation_date, ctime1, 10 # ensure the cache is not empty - refute(Cachex."empty?!"(cache1)) - refute(Cachex."empty?!"(cache2)) + refute Cachex.empty?(cache1) + refute Cachex.empty?(cache2) # wait for 10ms :timer.sleep(10) @@ -45,22 +45,18 @@ defmodule Cachex.Actions.ResetTest do ctime2 = now() # reset the whole cache - reset1 = Cachex.reset(cache1) - reset2 = Cachex.reset(cache2) - - # verify the reset - assert(reset1 == {:ok, true}) - assert(reset2 == {:ok, true}) + assert Cachex.reset(cache1) == :ok + assert Cachex.reset(cache2) == :ok # ensure the cache is reset - assert(Cachex."empty?!"(cache1)) - assert(Cachex."empty?!"(cache2)) + assert Cachex.empty?(cache1) + assert Cachex.empty?(cache2) # retrieve the stats - stats2 = Cachex.stats!(cache1) + stats2 = Cachex.stats(cache1) # verify they reset properly - assert_in_delta(stats2.meta.creation_date, ctime2, 10) + assert_in_delta stats2.meta.creation_date, ctime2, 10 end # This test ensures that we can reset a cache without touching any of the hooks @@ -80,31 
+76,28 @@ defmodule Cachex.Actions.ResetTest do ctime1 = now() # set some values - {:ok, true} = Cachex.put(cache, 1, 1) + assert Cachex.put(cache, 1, 1) == :ok # retrieve the stats - stats1 = Cachex.stats!(cache) + stats1 = Cachex.stats(cache) # verify the stats - assert_in_delta(stats1.meta.creation_date, ctime1, 5) + assert_in_delta stats1.meta.creation_date, ctime1, 5 # ensure the cache is not empty - refute(Cachex."empty?!"(cache)) + refute Cachex.empty?(cache) # reset only the cache - reset1 = Cachex.reset(cache, only: :cache) - - # verify the reset - assert(reset1 == {:ok, true}) + assert Cachex.reset(cache, only: :cache) == :ok # ensure the cache is reset - assert(Cachex."empty?!"(cache)) + assert Cachex.empty?(cache) # retrieve the stats - stats2 = Cachex.stats!(cache) + stats2 = Cachex.stats(cache) # verify they didn't change - assert(stats2.meta.creation_date == stats1.meta.creation_date) + assert stats2.meta.creation_date == stats1.meta.creation_date end # This test covers the resetting of a cache's hooks, but not resetting the cache @@ -126,16 +119,16 @@ defmodule Cachex.Actions.ResetTest do ctime1 = now() # set some values - {:ok, true} = Cachex.put(cache, 1, 1) + assert Cachex.put(cache, 1, 1) == :ok # retrieve the stats - stats1 = Cachex.stats!(cache) + stats1 = Cachex.stats(cache) # verify the stats - assert_in_delta(stats1.meta.creation_date, ctime1, 5) + assert_in_delta stats1.meta.creation_date, ctime1, 5 # ensure the cache is not empty - refute(Cachex."empty?!"(cache)) + refute Cachex.empty?(cache) # wait for 10ms :timer.sleep(10) @@ -144,34 +137,28 @@ defmodule Cachex.Actions.ResetTest do ctime2 = now() # reset only the hooks - reset1 = Cachex.reset(cache, only: :hooks, hooks: [MyModule]) - - # verify the reset - assert(reset1 == {:ok, true}) + assert Cachex.reset(cache, only: :hooks, hooks: [MyModule]) == :ok # ensure the cache is not reset - refute(Cachex."empty?!"(cache)) + refute Cachex.empty?(cache) # retrieve the stats - stats2 = Cachex.stats!(cache) + stats2 = Cachex.stats(cache) # verify they don't reset - assert(stats2.meta.creation_date == stats1.meta.creation_date) + assert stats2.meta.creation_date == stats1.meta.creation_date # reset without a hooks list - reset2 = Cachex.reset(cache, only: :hooks) - - # verify the reset - assert(reset2 == {:ok, true}) + assert Cachex.reset(cache, only: :hooks) == :ok # ensure the cache is not reset - refute(Cachex."empty?!"(cache)) + refute Cachex.empty?(cache) # retrieve the stats - stats3 = Cachex.stats!(cache) + stats3 = Cachex.stats(cache) # verify they don't reset - assert_in_delta(stats3.meta.creation_date, ctime2, 5) + assert_in_delta stats3.meta.creation_date, ctime2, 5 end # This test verifies that the distributed router correctly controls @@ -185,26 +172,22 @@ defmodule Cachex.Actions.ResetTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # retrieve the cache size, should be 2 - {:ok, 2} = Cachex.size(cache) + assert Cachex.size(cache) == 2 # reset just the local cache to start with - reset1 = Cachex.reset(cache, local: true) - sized1 = Cachex.size(cache) + assert Cachex.reset(cache, local: true) == :ok # check the local removal worked - assert(reset1 == {:ok, true}) - assert(sized1 == {:ok, 1}) + assert Cachex.size(cache) == 1 # reset the rest of the cluster caches - reset2 = Cachex.reset(cache, 
local: false) - sized2 = Cachex.size(cache) + assert Cachex.reset(cache, local: false) == :ok # check the other removals worked - assert(reset2 == {:ok, true}) - assert(sized2 == {:ok, 0}) + assert Cachex.size(cache) == 0 end end diff --git a/test/cachex/actions/restore_test.exs b/test/cachex/actions/restore_test.exs index ea420d54..7391bd7e 100644 --- a/test/cachex/actions/restore_test.exs +++ b/test/cachex/actions/restore_test.exs @@ -14,42 +14,31 @@ defmodule Cachex.Actions.RestoreTest do start = now() # add some cache entries - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2, expire: 1) - {:ok, true} = Cachex.put(cache, 3, 3, expire: 10_000) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2, expire: 1) == :ok + assert Cachex.put(cache, 3, 3, expire: 10_000) == :ok # create a local path to write to path = Path.join(tmp, TestUtils.gen_rand_bytes(8)) # save the cache to a local file - result1 = Cachex.save(cache, path) - result2 = Cachex.clear(cache) - result3 = Cachex.size(cache) + assert Cachex.save(cache, path) == :ok # verify the result and clearance - assert(result1 == {:ok, true}) - assert(result2 == {:ok, 3}) - assert(result3 == {:ok, 0}) + assert Cachex.clear(cache) == 3 + assert Cachex.size(cache) == 0 # wait a while before re-load :timer.sleep(50) # load the cache from the disk - result4 = Cachex.restore(cache, path) - result5 = Cachex.size(cache) - result6 = Cachex.ttl!(cache, 3) - - # verify that the load was ok - assert(result4 == {:ok, 2}) - assert(result5 == {:ok, 2}) + assert Cachex.restore(cache, path) == 2 + assert Cachex.size(cache) == 2 # verify TTL offsetting happens - assert_in_delta(result6, 10_000 - (now() - start), 5) + assert_in_delta Cachex.ttl(cache, 3), 10_000 - (now() - start), 5 # reload a bad file from disk (should not be trusted) - result7 = Cachex.restore(cache, tmp, trust: false) - - # verify the result failed - assert(result7 == {:error, :unreachable_file}) + assert Cachex.restore(cache, tmp, trust: false) == {:error, :unreachable_file} end end diff --git a/test/cachex/actions/save_test.exs b/test/cachex/actions/save_test.exs index 66fbe165..ba883184 100644 --- a/test/cachex/actions/save_test.exs +++ b/test/cachex/actions/save_test.exs @@ -13,28 +13,23 @@ defmodule Cachex.Actions.SaveTest do cache = TestUtils.create_cache() # add some cache entries - {:ok, true} = Cachex.put(cache, 1, 1) + assert Cachex.put(cache, 1, 1) == :ok # create a local path to write to path = Path.join(tmp, TestUtils.gen_rand_bytes(8)) # save the cache to a local file - result1 = Cachex.save(cache, path) - result2 = Cachex.clear(cache) - result3 = Cachex.size(cache) + assert Cachex.save(cache, path) == :ok # verify the result and clearance - assert(result1 == {:ok, true}) - assert(result2 == {:ok, 1}) - assert(result3 == {:ok, 0}) + assert Cachex.clear(cache) == 1 + assert Cachex.size(cache) == 0 # load the cache from the disk - result4 = Cachex.restore(cache, path) - result5 = Cachex.size(cache) + assert Cachex.restore(cache, path) == 1 # verify that the load was ok - assert(result4 == {:ok, 1}) - assert(result5 == {:ok, 1}) + assert Cachex.size(cache) == 1 end # This test covers the backing up of a cache cluster to a local disk location. 
We @@ -49,42 +44,30 @@ defmodule Cachex.Actions.SaveTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # create a local path to write to path1 = Path.join(tmp, TestUtils.gen_rand_bytes(8)) path2 = Path.join(tmp, TestUtils.gen_rand_bytes(8)) # save the cache to a local file for local/remote - save1 = Cachex.save(cache, path1, local: true) - save2 = Cachex.save(cache, path2, local: false) - - # verify the save results - assert(save1 == {:ok, true}) - assert(save2 == {:ok, true}) + assert Cachex.save(cache, path1, local: true) == :ok + assert Cachex.save(cache, path2, local: false) == :ok # clear the cache to remove all - {:ok, 2} = Cachex.clear(cache) + assert Cachex.clear(cache) == 2 # load the local cache from the disk - load1 = Cachex.restore(cache, path1) - size1 = Cachex.size(cache) - - # verify that the load was ok - assert(load1 == {:ok, 1}) - assert(size1 == {:ok, 1}) + assert Cachex.restore(cache, path1) == 1 + assert Cachex.size(cache) == 1 # clear the cache again - {:ok, 1} = Cachex.clear(cache) + assert Cachex.clear(cache) == 1 # load the full cache from the disk - load2 = Cachex.restore(cache, path2) - size2 = Cachex.size(cache) - - # verify that the load was ok - assert(load2 == {:ok, 2}) - assert(size2 == {:ok, 2}) + assert Cachex.restore(cache, path2) == 2 + assert Cachex.size(cache) == 2 end test "returning an error on invalid output path" do diff --git a/test/cachex/actions/size_test.exs b/test/cachex/actions/size_test.exs index a7233944..b924c40e 100644 --- a/test/cachex/actions/size_test.exs +++ b/test/cachex/actions/size_test.exs @@ -7,26 +7,19 @@ defmodule Cachex.Actions.SizeTest do # create a test cache cache = TestUtils.create_cache() - # retrieve the cache size - result1 = Cachex.size(cache) - - # it should be empty - assert(result1 == {:ok, 0}) + # retrieve the cache size, it should be empty + assert Cachex.size(cache) == 0 # add some cache entries - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2, expire: 1) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2, expire: 1) == :ok # wait 2 ms to expire :timer.sleep(2) # retrieve the cache size - result2 = Cachex.size(cache) - result3 = Cachex.size(cache, expired: false) - - # it should show the new key - assert(result2 == {:ok, 2}) - assert(result3 == {:ok, 1}) + assert Cachex.size(cache) == 2 + assert Cachex.size(cache, expired: false) == 1 end # This test verifies that the distributed router correctly controls @@ -40,35 +33,24 @@ defmodule Cachex.Actions.SizeTest do {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2) # we know that 1 & 2 hash to different nodes - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, true} = Cachex.put(cache, 2, 2) + assert Cachex.put(cache, 1, 1) == :ok + assert Cachex.put(cache, 2, 2) == :ok # retrieve the cache size, should be 2 - size1 = Cachex.size(cache) - - # check the size of the cache - assert(size1 == {:ok, 2}) + assert Cachex.size(cache) == 2 # clear just the local cache to start with - {:ok, 1} = Cachex.clear(cache, local: true) + assert Cachex.clear(cache, local: true) == 1 # fetch the size of local and remote - size2 = Cachex.size(cache, local: true) - size3 = Cachex.size(cache, local: false) - - # check that the local is 0, remote is 1 - assert(size2 == {:ok, 0}) - assert(size3 == {:ok, 1}) + assert 
Cachex.size(cache, local: true) == 0 + assert Cachex.size(cache, local: false) == 1 # clear the entire cluster at this point - {:ok, 1} = Cachex.clear(cache) + assert Cachex.clear(cache) == 1 # fetch the size of local and remote (again) - size4 = Cachex.size(cache, local: true) - size5 = Cachex.size(cache, local: false) - - # check that both are now 0 - assert(size4 == {:ok, 0}) - assert(size5 == {:ok, 0}) + assert Cachex.size(cache, local: true) == 0 + assert Cachex.size(cache, local: false) == 0 end end diff --git a/test/cachex/actions/stats_test.exs b/test/cachex/actions/stats_test.exs index 75c4a5d5..315550e3 100644 --- a/test/cachex/actions/stats_test.exs +++ b/test/cachex/actions/stats_test.exs @@ -17,23 +17,23 @@ defmodule Cachex.Actions.StatsTest do ctime = now() # execute some cache actions - {:ok, true} = Cachex.put(cache, 1, 1) - {:ok, 1} = Cachex.get(cache, 1) + assert Cachex.put(cache, 1, 1) + assert Cachex.get(cache, 1) == 1 # retrieve default stats stats = Cachex.stats!(cache) # verify the first returns a valid meta object - assert_in_delta(stats.meta.creation_date, ctime, 5) + assert_in_delta stats.meta.creation_date, ctime, 5 # verify attached statistics - assert(stats.hits == 1) - assert(stats.misses == 0) - assert(stats.operations == 2) - assert(stats.writes == 1) + assert stats.hits == 1 + assert stats.misses == 0 + assert stats.operations == 2 + assert stats.writes == 1 # verify attached rates - assert(stats.hit_rate == 100) + assert stats.hit_rate == 100 end # This test just verifies that we receive an error trying to retrieve stats @@ -42,11 +42,8 @@ defmodule Cachex.Actions.StatsTest do # create a test cache cache = TestUtils.create_cache(stats: false) - # retrieve default stats - stats = Cachex.stats(cache) - - # we should receive an error - assert(stats == {:error, :stats_disabled}) + # retrieve default stats, we should receive an error + assert Cachex.stats(cache) == {:error, :stats_disabled} end # This test verifies that we correctly handle hit/miss rates when there are 0 @@ -63,25 +60,25 @@ defmodule Cachex.Actions.StatsTest do cache4 = TestUtils.create_cache(hooks: [hook]) # set cache1 to 100% misses - {:ok, nil} = Cachex.get(cache1, 1) + assert Cachex.get(cache1, 1) == nil # set cache2 to 100% hits - {:ok, true} = Cachex.put(cache2, 1, 1) - {:ok, 1} = Cachex.get(cache2, 1) + assert Cachex.put(cache2, 1, 1) == :ok + assert Cachex.get(cache2, 1) == 1 # set cache3 to be 50% each way - {:ok, true} = Cachex.put(cache3, 1, 1) - {:ok, 1} = Cachex.get(cache3, 1) - {:ok, nil} = Cachex.get(cache3, 2) + assert Cachex.put(cache3, 1, 1) == :ok + assert Cachex.get(cache3, 1) == 1 + assert Cachex.get(cache3, 2) == nil # set cache4 to have some loads - {:commit, 1} = Cachex.fetch(cache4, 1, & &1) + assert Cachex.fetch(cache4, 1, & &1) == {:commit, 1} # retrieve all cache rates - stats1 = Cachex.stats!(cache1) - stats2 = Cachex.stats!(cache2) - stats3 = Cachex.stats!(cache3) - stats4 = Cachex.stats!(cache4) + stats1 = Cachex.stats(cache1) + stats2 = Cachex.stats(cache2) + stats3 = Cachex.stats(cache3) + stats4 = Cachex.stats(cache4) # remove the metadata from the stats stats1 = Map.delete(stats1, :meta) diff --git a/test/cachex/actions/stream_test.exs b/test/cachex/actions/stream_test.exs index fb03add4..8ec75d60 100644 --- a/test/cachex/actions/stream_test.exs +++ b/test/cachex/actions/stream_test.exs @@ -9,23 +9,22 @@ defmodule Cachex.Actions.StreamTest do cache = TestUtils.create_cache() # add some keys to the cache - {:ok, true} = Cachex.put(cache, "key1", "value1") - 
{:ok, true} = Cachex.put(cache, "key2", "value2") - {:ok, true} = Cachex.put(cache, "key3", "value3") + assert Cachex.put(cache, "key1", "value1") == :ok + assert Cachex.put(cache, "key2", "value2") == :ok + assert Cachex.put(cache, "key3", "value3") == :ok - # grab the raw versions of each record - {:ok, entry1} = Cachex.inspect(cache, {:entry, "key1"}) - {:ok, entry2} = Cachex.inspect(cache, {:entry, "key2"}) - {:ok, entry3} = Cachex.inspect(cache, {:entry, "key3"}) - - # create a cache stream - {:ok, stream} = Cachex.stream(cache) - - # consume the stream - result = Enum.sort(stream) + # create and consume a cache stream + result = + cache + |> Cachex.stream() + |> Enum.sort() # verify the results are the ordered entries - assert(result == [entry1, entry2, entry3]) + assert result == [ + Cachex.inspect(cache, {:entry, "key1"}), + Cachex.inspect(cache, {:entry, "key2"}), + Cachex.inspect(cache, {:entry, "key3"}) + ] end # This test covers the use case of custom match patterns, by testing various @@ -36,9 +35,9 @@ defmodule Cachex.Actions.StreamTest do cache = TestUtils.create_cache() # add some keys to the cache - {:ok, true} = Cachex.put(cache, "key1", "value1") - {:ok, true} = Cachex.put(cache, "key2", "value2") - {:ok, true} = Cachex.put(cache, "key3", "value3") + assert Cachex.put(cache, "key1", "value1") == :ok + assert Cachex.put(cache, "key2", "value2") == :ok + assert Cachex.put(cache, "key3", "value3") == :ok # create our query filter filter = Cachex.Query.unexpired() @@ -48,24 +47,18 @@ defmodule Cachex.Actions.StreamTest do query2 = Cachex.Query.build(where: filter, output: :key) # create cache streams - {:ok, stream1} = Cachex.stream(cache, query1) - {:ok, stream2} = Cachex.stream(cache, query2) - - # consume the streams - result1 = Enum.sort(stream1) - result2 = Enum.sort(stream2) + stream1 = Cachex.stream(cache, query1) + stream2 = Cachex.stream(cache, query2) # verify the first results - assert( - result1 == [ - {"key1", "value1"}, - {"key2", "value2"}, - {"key3", "value3"} - ] - ) + assert Enum.sort(stream1) == [ + {"key1", "value1"}, + {"key2", "value2"}, + {"key3", "value3"} + ] # verify the second results - assert(result2 == ["key1", "key2", "key3"]) + assert Enum.sort(stream2) == ["key1", "key2", "key3"] end # If an invalid match spec is provided as the query, an error is returned. 
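Since the stream tests above lean on the query-driven API, a short sketch of the pattern may help reviewers; the `:stream_demo` cache name and its single entry are assumptions for illustration, while the unwrapped return of `Cachex.stream/2` matches the assertions in this changeset:

```elixir
# start a throwaway cache and seed one entry
Cachex.start_link(:stream_demo, [])
Cachex.put(:stream_demo, "key1", "value1")

# build a query matching unexpired entries, emitting only keys
query = Cachex.Query.build(where: Cachex.Query.unexpired(), output: :key)

# previously Cachex.stream/2 returned {:ok, stream}; it now
# resolves to the stream itself, so it pipes straight into Enum
["key1"] =
  :stream_demo
  |> Cachex.stream(query)
  |> Enum.to_list()
```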
@@ -74,11 +67,8 @@ defmodule Cachex.Actions.StreamTest do
     # create a test cache
     cache = TestUtils.create_cache()

-    # create cache stream
-    result = Cachex.stream(cache, {:invalid})
-
-    # verify the stream fails
-    assert(result == {:error, :invalid_match})
+    # create cache stream, verify the stream fails
+    assert Cachex.stream(cache, {:invalid}) == {:error, :invalid_match}
   end

   # This test verifies that this action is correctly disabled in a cluster,
@@ -101,10 +91,7 @@
     # build a generic query to use later
     query = Cachex.Query.build()

-    # create a cache stream with the local flag
-    stream = Cachex.stream(cache, query, local: true)
-
-    # we should be able to stream the local node
-    assert stream != {:error, :non_distributed}
+    # create a cache stream with the local flag, we should be able to stream
+    assert Cachex.stream(cache, query, local: true) != {:error, :non_distributed}
   end
 end
diff --git a/test/cachex/actions/take_test.exs b/test/cachex/actions/take_test.exs
index 2c7249e8..9d7e17c6 100644
--- a/test/cachex/actions/take_test.exs
+++ b/test/cachex/actions/take_test.exs
@@ -12,8 +12,8 @@ defmodule Cachex.Actions.TakeTest do
     cache = TestUtils.create_cache(hooks: [hook])

     # set some keys in the cache
-    {:ok, true} = Cachex.put(cache, 1, 1)
-    {:ok, true} = Cachex.put(cache, 2, 2, expire: 1)
+    assert Cachex.put(cache, 1, 1) == :ok
+    assert Cachex.put(cache, 2, 2, expire: 1) == :ok

     # wait for the TTL to pass
     :timer.sleep(2)
@@ -22,36 +22,24 @@
     TestUtils.flush()

     # take the first and second key
-    result1 = Cachex.take(cache, 1)
-    result2 = Cachex.take(cache, 2)
+    assert Cachex.take(cache, 1) == 1
+    assert Cachex.take(cache, 2) == nil

     # take a missing key
-    result3 = Cachex.take(cache, 3)
-
-    # verify the first key is retrieved
-    assert(result1 == {:ok, 1})
-
-    # verify the second and third keys are missing
-    assert(result2 == {:ok, nil})
-    assert(result3 == {:ok, nil})
+    assert Cachex.take(cache, 3) == nil

     # assert we receive valid notifications
-    assert_receive({{:take, [1, []]}, ^result1})
-    assert_receive({{:take, [2, []]}, ^result2})
-    assert_receive({{:take, [3, []]}, ^result3})
+    assert_receive {{:take, [1, []]}, 1}
+    assert_receive {{:take, [2, []]}, nil}
+    assert_receive {{:take, [3, []]}, nil}

     # check we received valid purge actions for the TTL
-    assert_receive({{:purge, [[]]}, {:ok, 1}})
+    assert_receive {{:purge, [[]]}, 1}

     # ensure that the keys no longer exist in the cache
-    exists1 = Cachex.exists?(cache, 1)
-    exists2 = Cachex.exists?(cache, 2)
-    exists3 = Cachex.exists?(cache, 3)
-
-    # none should exist
-    assert(exists1 == {:ok, false})
-    assert(exists2 == {:ok, false})
-    assert(exists3 == {:ok, false})
+    refute Cachex.exists?(cache, 1)
+    refute Cachex.exists?(cache, 2)
+    refute Cachex.exists?(cache, 3)
   end

   # This test verifies that this action is correctly distributed across
@@ -63,31 +51,19 @@
     {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2)

     # we know that 1 & 2 hash to different nodes
-    {:ok, true} = Cachex.put(cache, 1, 1)
-    {:ok, true} = Cachex.put(cache, 2, 2)
+    assert Cachex.put(cache, 1, 1) == :ok
+    assert Cachex.put(cache, 2, 2) == :ok

     # check the results of the calls across nodes
-    size1 = Cachex.size(cache, local: true)
-    size2 = Cachex.size(cache, local: false)
-
-    # one local, two total
-    assert(size1 == {:ok, 1})
-    assert(size2 == {:ok, 2})
+    assert Cachex.size(cache, local: true) == 1
+    assert Cachex.size(cache, local: false) == 2

     # take each item from the cache cluster
-    take1 = Cachex.take(cache, 1)
-    take2 = Cachex.take(cache, 2)
-
-    # check both records are taken
-    assert(take1 == {:ok, 1})
-    assert(take2 == {:ok, 2})
+    assert Cachex.take(cache, 1) == 1
+    assert Cachex.take(cache, 2) == 2

     # check the results of the calls across nodes
-    size3 = Cachex.size(cache, local: true)
-    size4 = Cachex.size(cache, local: false)
-
-    # no records are left
-    assert(size3 == {:ok, 0})
-    assert(size4 == {:ok, 0})
+    assert Cachex.size(cache, local: true) == 0
+    assert Cachex.size(cache, local: false) == 0
   end
 end
diff --git a/test/cachex/actions/touch_test.exs b/test/cachex/actions/touch_test.exs
index 943cbe52..a84575a0 100644
--- a/test/cachex/actions/touch_test.exs
+++ b/test/cachex/actions/touch_test.exs
@@ -13,11 +13,11 @@ defmodule Cachex.Actions.TouchTest do
     cache = TestUtils.create_cache(hooks: [hook])

     # pull back the state
-    state = Services.Overseer.retrieve(cache)
+    state = Services.Overseer.lookup(cache)

     # add some keys to the cache
-    {:ok, true} = Cachex.put(cache, 1, 1)
-    {:ok, true} = Cachex.put(cache, 2, 2, expire: 1000)
+    assert Cachex.put(cache, 1, 1) == :ok
+    assert Cachex.put(cache, 2, 2, expire: 1000) == :ok

     # clear messages
     TestUtils.flush()
@@ -30,30 +30,23 @@
       Cachex.Actions.read(state, 2)

     # the first TTL should be nil
-    assert(expiration1 == nil)
+    assert expiration1 == nil

     # the second TTL should be roughly 1000
-    assert_in_delta(expiration2, 995, 6)
+    assert_in_delta expiration2, 995, 6

     # wait for 50ms
     :timer.sleep(50)

     # touch the keys
-    touch1 = Cachex.touch(cache, 1)
-    touch2 = Cachex.touch(cache, 2)
-    touch3 = Cachex.touch(cache, 3)
-
-    # the first two writes should succeed
-    assert(touch1 == {:ok, true})
-    assert(touch2 == {:ok, true})
-
-    # the third shouldn't, as it's missing
-    assert(touch3 == {:ok, false})
+    assert Cachex.touch(cache, 1)
+    assert Cachex.touch(cache, 2)
+    refute Cachex.touch(cache, 3)

     # verify the hooks were updated with the message
-    assert_receive({{:touch, [1, []]}, ^touch1})
-    assert_receive({{:touch, [2, []]}, ^touch2})
-    assert_receive({{:touch, [3, []]}, ^touch3})
+    assert_receive {{:touch, [1, []]}, true}
+    assert_receive {{:touch, [2, []]}, true}
+    assert_receive {{:touch, [3, []]}, false}

     # retrieve the raw records again
     entry(modified: modified3, expiration: expiration3) =
@@ -63,22 +56,19 @@
       Cachex.Actions.read(state, 2)

     # the first expiration should still be nil
-    assert(expiration3 == nil)
+    assert expiration3 == nil

     # the first touch time should be roughly 50ms after the first one
-    assert_in_delta(modified3, modified1 + 60, 11)
+    assert_in_delta modified3, modified1 + 60, 11

     # the second expiration should be roughly 50ms lower than the first
-    assert_in_delta(expiration4, expiration2 - 60, 11)
+    assert_in_delta expiration4, expiration2 - 60, 11

     # the second touch time should also be 50ms after the first one
-    assert_in_delta(modified4, modified2 + 60, 11)
-
-    # for good measure, retrieve the second expiration
-    expiration5 = Cachex.ttl!(cache, 2)
+    assert_in_delta modified4, modified2 + 60, 11

     # it should be roughly 945ms left
-    assert_in_delta(expiration5, 940, 11)
+    assert_in_delta Cachex.ttl(cache, 2), 940, 11
   end

   # This test verifies that this action is correctly distributed across
@@ -90,38 +80,34 @@
     {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2)

     # we know that 1 & 2 hash to different nodes
-    {:ok, true} = Cachex.put(cache, 1, 1)
-    {:ok, true} = Cachex.put(cache, 2, 2)
+    assert Cachex.put(cache, 1, 1) == :ok
+    assert Cachex.put(cache, 2, 2) == :ok

     # wait a little
     :timer.sleep(10)

-    # pull back the records inserted so far
-    {:ok, export1} = Cachex.export(cache)
-
     # sort to guarantee we're checking well
-    [record1, record2] = Enum.sort(export1)
+    [record1, record2] =
+      cache |> Cachex.export() |> Enum.sort()

     # unpack the records touch time
     entry(modified: modified1) = record1
     entry(modified: modified2) = record2

     # now touch both keys
-    {:ok, true} = Cachex.touch(cache, 1)
-    {:ok, true} = Cachex.touch(cache, 2)
-
-    # pull back the records after the touchs
-    {:ok, export2} = Cachex.export(cache)
+    assert Cachex.touch(cache, 1)
+    assert Cachex.touch(cache, 2)

     # sort to guarantee we're checking well
-    [record3, record4] = Enum.sort(export2)
+    [record3, record4] =
+      cache |> Cachex.export() |> Enum.sort()

     # unpack the records touch time
     entry(modified: modified3) = record3
     entry(modified: modified4) = record4

     # new modified should be larger than old
-    assert(modified3 > modified1)
-    assert(modified4 > modified2)
+    assert modified3 > modified1
+    assert modified4 > modified2
   end
 end
diff --git a/test/cachex/actions/transaction_test.exs b/test/cachex/actions/transaction_test.exs
index 20ffae64..b42ff7aa 100644
--- a/test/cachex/actions/transaction_test.exs
+++ b/test/cachex/actions/transaction_test.exs
@@ -24,10 +24,7 @@ defmodule Cachex.Actions.TransactionTest do
     :timer.sleep(10)

     # write a key from outside a transaction
-    incr = Cachex.incr(cache, "key")
-
-    # verify the write was queued after the transaction
-    assert(incr == {:ok, 2})
+    assert Cachex.incr(cache, "key") == 2
   end

   # This test ensures that any errors which occur inside a transaction are caught
@@ -43,16 +40,12 @@
     end)

     # verify the error was caught
-    assert(result1 == {:error, "Error message"})
+    assert result1 == {:error, "Error message"}

     # ensure a new transaction executes normally
-    result2 =
-      Cachex.transaction(cache, [], fn ->
-        Cachex.Services.Locksmith.transaction?()
-      end)
-
-    # verify the results are correct
-    assert(result2 == {:ok, true})
+    assert Cachex.transaction(cache, [], fn ->
+             Cachex.Services.Locksmith.transaction?()
+           end)
   end

   # This test makes sure that a cache with transactions disabled will automatically
@@ -64,19 +57,19 @@
     cache = TestUtils.create_cache()

     # retrieve the cache state
-    state1 = Services.Overseer.retrieve(cache)
+    state1 = Services.Overseer.lookup(cache)

     # verify transactions are disabled
-    assert(cache(state1, :transactions) == false)
+    assert cache(state1, :transactions) == false

     # execute a transactions
     Cachex.transaction(cache, [], & &1)

     # pull the state back from the cache again
-    state2 = Services.Overseer.retrieve(cache)
+    state2 = Services.Overseer.lookup(cache)

     # verify transactions are now enabled
-    assert(cache(state2, :transactions) == true)
+    assert cache(state2, :transactions) == true
   end

   # This test verifies that this action is correctly distributed across
@@ -88,11 +81,12 @@
     {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2)

     # we know that 2 & 3 hash to the same slots
-    {:ok, result} = Cachex.transaction(cache, [], &:erlang.phash2/1)
-    {:ok, ^result} = Cachex.transaction(cache, [2, 3], &:erlang.phash2/1)
+    result1 = Cachex.transaction(cache, [], &:erlang.phash2/1)
+    result2 = Cachex.transaction(cache, [2, 3], &:erlang.phash2/1)

     # check the result phashed ok
-    assert(result > 0 && is_integer(result))
+    assert result1 > 0 && is_integer(result1)
+    assert result1 == result2
   end

   # This test verifies that all keys in a put_many/3 must hash to the
@@ -102,10 +96,7 @@
     # create a new cache cluster for cleaning
     {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2)

-    # we know that 1 & 3 don't hash to the same slots
-    transaction = Cachex.transaction(cache, [1, 2], &:erlang.phash2/1)
-
-    # so there should be an error
-    assert(transaction == {:error, :cross_slot})
+    # we know that 1 & 2 don't hash to the same slots, so there should be an error
+    assert Cachex.transaction(cache, [1, 2], &:erlang.phash2/1) == {:error, :cross_slot}
   end
 end
diff --git a/test/cachex/actions/ttl_test.exs b/test/cachex/actions/ttl_test.exs
index 365be526..4cb8c535 100644
--- a/test/cachex/actions/ttl_test.exs
+++ b/test/cachex/actions/ttl_test.exs
@@ -9,24 +9,15 @@ defmodule Cachex.Actions.TtlTest do
     cache = TestUtils.create_cache()

     # set several keys in the cache
-    {:ok, true} = Cachex.put(cache, 1, 1)
-    {:ok, true} = Cachex.put(cache, 2, 2, expire: 10000)
+    assert Cachex.put(cache, 1, 1) == :ok
+    assert Cachex.put(cache, 2, 2, expire: 10000) == :ok

-    # verify the TTL of both keys
-    ttl1 = Cachex.ttl(cache, 1)
-    ttl2 = Cachex.ttl!(cache, 2)
-
-    # verify the TTL of a missing key
-    ttl3 = Cachex.ttl(cache, 3)
-
-    # the first TTL should be nil
-    assert(ttl1 == {:ok, nil})
+    # verify the TTL of the nil keys
+    assert Cachex.ttl(cache, 1) == nil
+    assert Cachex.ttl(cache, 3) == nil

     # the second should be close to 10s
-    assert_in_delta(ttl2, 10000, 10)
-
-    # the third should return a missing value
-    assert(ttl3 == {:ok, nil})
+    assert_in_delta Cachex.ttl(cache, 2), 10000, 10
   end

   # This test verifies that this action is correctly distributed across
@@ -38,15 +29,11 @@
     {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2)

     # we know that 1 & 2 hash to different nodes
-    {:ok, true} = Cachex.put(cache, 1, 1, expire: 500)
-    {:ok, true} = Cachex.put(cache, 2, 2, expire: 500)
+    assert Cachex.put(cache, 1, 1, expire: 500) == :ok
+    assert Cachex.put(cache, 2, 2, expire: 500) == :ok

     # check the expiration of each key in the cluster
-    {:ok, expiration1} = Cachex.ttl(cache, 1)
-    {:ok, expiration2} = Cachex.ttl(cache, 2)
-
-    # check the delta changed
-    assert(expiration1 > 450)
-    assert(expiration2 > 450)
+    assert Cachex.ttl(cache, 1) > 450
+    assert Cachex.ttl(cache, 2) > 450
   end
 end
diff --git a/test/cachex/actions/update_test.exs b/test/cachex/actions/update_test.exs
index 8cc2c3a7..e031d5dc 100644
--- a/test/cachex/actions/update_test.exs
+++ b/test/cachex/actions/update_test.exs
@@ -10,36 +10,26 @@ defmodule Cachex.Actions.UpdateTest do
     cache = TestUtils.create_cache()

     # set a value with no TTL inside the cache
-    {:ok, true} = Cachex.put(cache, 1, 1)
+    assert Cachex.put(cache, 1, 1) == :ok

     # set a value with a TTL in the cache
-    {:ok, true} = Cachex.put(cache, 2, 2, expire: 10000)
+    assert Cachex.put(cache, 2, 2, expire: 10000) == :ok

     # attempt to update both keys
-    update1 = Cachex.update(cache, 1, 3)
-    update2 = Cachex.update(cache, 2, 3)
-
-    # ensure both succeeded
-    assert(update1 == {:ok, true})
-    assert(update2 == {:ok, true})
+    assert Cachex.update(cache, 1, 3)
+    assert Cachex.update(cache, 2, 3)

     # retrieve the modified keys
-    value1 = Cachex.get(cache, 1)
-    value2 = Cachex.get(cache, 2)
-
-    # verify the updates
-    assert(value1 == {:ok, 3})
-    assert(value2 == {:ok, 3})
-
-    # pull back the TTLs
-    ttl1 = Cachex.ttl!(cache, 1)
-    ttl2 = Cachex.ttl!(cache, 2)
+    assert Cachex.get(cache, 1) == 3
+    assert Cachex.get(cache, 2) == 3

     # the first TTL should still be unset
-    assert(ttl1 == nil)
+    assert Cachex.ttl(cache, 1) == nil

     # the second should still be set
-    assert_in_delta(ttl2, 10000, 10)
+    cache
+    |> Cachex.ttl(2)
+    |> assert_in_delta(10000, 10)
   end

   # This test just verifies that we successfully return an error when we try to
@@ -49,12 +39,8 @@
     # create a test cache
     cache = TestUtils.create_cache()

     # attempt to update a missing key in the cache
-    update1 = Cachex.update(cache, 1, 3)
-    update2 = Cachex.update(cache, 2, 3)
-
-    # ensure both failed
-    assert(update1 == {:ok, false})
-    assert(update2 == {:ok, false})
+    refute Cachex.update(cache, 1, 3)
+    refute Cachex.update(cache, 2, 3)
   end

   # This test verifies that this action is correctly distributed across
@@ -66,19 +52,15 @@
     {cache, _nodes, _cluster} = TestUtils.create_cache_cluster(2)

     # we know that 1 & 2 hash to different nodes
-    {:ok, true} = Cachex.put(cache, 1, 1, expire: 500)
-    {:ok, true} = Cachex.put(cache, 2, 2, expire: 500)
+    assert Cachex.put(cache, 1, 1, expire: 500) == :ok
+    assert Cachex.put(cache, 2, 2, expire: 500) == :ok

     # run updates against both keys
-    {:ok, true} = Cachex.update(cache, 1, -1)
-    {:ok, true} = Cachex.update(cache, 2, -2)
+    assert Cachex.update(cache, 1, -1)
+    assert Cachex.update(cache, 2, -2)

     # try to retrieve both of the set keys
-    updated1 = Cachex.get(cache, 1)
-    updated2 = Cachex.get(cache, 2)
-
-    # check the update occurred
-    assert(updated1 == {:ok, -1})
-    assert(updated2 == {:ok, -2})
+    assert Cachex.get(cache, 1) == -1
+    assert Cachex.get(cache, 2) == -2
   end
 end
diff --git a/test/cachex/actions/warm_test.exs b/test/cachex/actions/warm_test.exs
index 338f40ee..644afc31 100644
--- a/test/cachex/actions/warm_test.exs
+++ b/test/cachex/actions/warm_test.exs
@@ -21,20 +21,20 @@ defmodule Cachex.Actions.WarmTest do
     )

     # check that the key was warmed
-    assert Cachex.get!(cache, 1) == 1
+    assert Cachex.get(cache, 1) == 1

     # clean out our cache entries
-    assert Cachex.clear!(cache) == 1
-    assert Cachex.get!(cache, 1) == nil
+    assert Cachex.clear(cache) == 1
+    assert Cachex.get(cache, 1) == nil

     # manually trigger a cache warming of all modules
-    assert Cachex.warm(cache) == {:ok, [:manual_warmer1]}
+    assert Cachex.warm(cache) == [:manual_warmer1]

     # wait for the warming
     :timer.sleep(50)

     # check that our key has been put back
-    assert Cachex.get!(cache, 1) == 1
+    assert Cachex.get(cache, 1) == 1
   end

   test "manually warming a cache and awaiting results" do
@@ -55,15 +55,15 @@
     )

     # check that the key was warmed
-    assert Cachex.get!(cache, 1) == 1
+    assert Cachex.get(cache, 1) == 1

     # clean out our cache entries
-    assert Cachex.clear!(cache) == 1
-    assert Cachex.get!(cache, 1) == nil
+    assert Cachex.clear(cache) == 1
+    assert Cachex.get(cache, 1) == nil

     # manually trigger a cache warming of all modules
-    assert Cachex.warm(cache, wait: true) == {:ok, [:manual_warmer2]}
-    assert Cachex.get!(cache, 1) == 1
+    assert Cachex.warm(cache, wait: true) == [:manual_warmer2]
+    assert Cachex.get(cache, 1) == 1
   end

   # This test covers the case where you manually specify a list of modules
@@ -87,29 +87,28 @@
     )

     # check that the key was warmed
-    assert Cachex.get!(cache, 1) == 1
+    assert Cachex.get(cache, 1) == 1

     # clean out our cache entries
-    assert Cachex.clear!(cache) == 1
-    assert Cachex.get!(cache, 1) == nil
+    assert Cachex.clear(cache) == 1
+    assert Cachex.get(cache, 1) == nil

     # manually trigger a cache warming
-    assert Cachex.warm(cache, only: []) == {:ok, []}
+    assert Cachex.warm(cache, only: []) == []

     # wait for the warming
     :timer.sleep(50)

     # check that our key was never put back
-    assert Cachex.get!(cache, 1) == nil
+    assert Cachex.get(cache, 1) == nil

     # manually trigger a cache warming, specifying our module
-    assert Cachex.warm(cache, only: [:manual_warmer3]) ==
-             {:ok, [:manual_warmer3]}
+    assert Cachex.warm(cache, only: [:manual_warmer3]) == [:manual_warmer3]

     # wait for the warming
     :timer.sleep(50)

     # check that our key has been put back
-    assert Cachex.get!(cache, 1) == 1
+    assert Cachex.get(cache, 1) == 1
   end
 end
diff --git a/test/cachex/actions_test.exs b/test/cachex/actions_test.exs
index 26582668..d765b95c 100644
--- a/test/cachex/actions_test.exs
+++ b/test/cachex/actions_test.exs
@@ -21,37 +21,28 @@ defmodule Cachex.ActionsTest do
     cache = TestUtils.create_cache(hooks: [hook])

     # retrieve the state
-    state = Services.Overseer.retrieve(cache)
+    state = Services.Overseer.lookup(cache)

     # write several values
-    {:ok, true} = Cachex.put(cache, 1, 1)
-    {:ok, true} = Cachex.put(cache, 2, 2, expire: 1)
+    assert Cachex.put(cache, 1, 1) == :ok
+    assert Cachex.put(cache, 2, 2, expire: 1) == :ok

     # let the TTL expire
     :timer.sleep(2)

     # read back the values from the table
-    record1 = Cachex.Actions.read(state, 1)
-    record2 = Cachex.Actions.read(state, 2)
-    record3 = Cachex.Actions.read(state, 3)
+    record = Cachex.Actions.read(state, 1)
+    assert match?(entry(key: 1, value: 1), record)

-    # the first should find a record
-    assert(match?(entry(key: 1, value: 1), record1))
-
-    # the second should expire
-    assert(record2 == nil)
-
-    # the third is missing
-    assert(record3 == nil)
+    # read back missing values from the table
+    assert Cachex.Actions.read(state, 2) == nil
+    assert Cachex.Actions.read(state, 3) == nil

     # we should receive the purge of the second key
-    assert_receive({{:purge, [[]]}, {:ok, 1}})
+    assert_receive {{:purge, [[]]}, 1}

     # verify if the second key exists
-    exists1 = Cachex.exists?(cache, 2)
-
-    # it shouldn't exist
-    assert(exists1 == {:ok, false})
+    refute Cachex.exists?(cache, 2)
   end

   test "carrying out generic write actions" do
@@ -59,116 +50,64 @@
     # create a test cache
     cache = TestUtils.create_cache()

     # retrieve the state
-    state = Services.Overseer.retrieve(cache)
+    state = Services.Overseer.lookup(cache)

     # write some values into the cache
-    write1 =
-      Cachex.Actions.write(
-        state,
-        entry(
-          key: "key",
-          value: "value",
-          modified: 1
-        )
-      )
-
-    # verify the write
-    assert(write1 == {:ok, true})
-
-    # retrieve the value
-    value1 = Cachex.Actions.read(state, "key")
+    assert Cachex.Actions.write(
+             state,
+             entry(
+               key: "key",
+               value: "value",
+               modified: 1
+             )
+           )

     # validate the value
-    assert(
-      value1 ==
-        entry(
-          key: "key",
-          value: "value",
-          modified: 1
-        )
-    )
+    assert Cachex.Actions.read(state, "key") ==
+             entry(
+               key: "key",
+               value: "value",
+               modified: 1
+             )

     # attempt to update some values
-    update1 = Cachex.Actions.update(state, "key", entry_mod(value: "yek"))
-    update2 = Cachex.Actions.update(state, "nop", entry_mod(value: "yek"))
-
-    # the first should be ok
-    assert(update1 == {:ok, true})
-
-    # the second is missing
-    assert(update2 == {:ok, false})
-
-    # retrieve the value
-    value2 = Cachex.Actions.read(state, "key")
+    assert Cachex.Actions.update(state, "key", entry_mod(value: "yek"))
+    refute Cachex.Actions.update(state, "nop", entry_mod(value: "yek"))

     # validate the update took effect
-    assert(
-      value2 ==
-        entry(
-          key: "key",
-          value: "yek",
-          modified: 1
-        )
-    )
+    assert Cachex.Actions.read(state, "key") ==
+             entry(
+               key: "key",
+               value: "yek",
+               modified: 1
+             )
   end

   # This test just ensures that we correctly convert return values to either a
   # :commit Tuple or an :ignore Tuple. We also make sure to verify that the default
   # behaviour is a :commit Tuple for backwards compatibility.
   test "formatting commit/ignore return values" do
-    # define our base Tuples to test against
-    tuple1 = {:commit, true}
-    tuple2 = {:ignore, true}
-    tuple3 = {:error, true}
-    tuple4 = {:commit, true, []}
-
-    # define our base value
-    value1 = true
-
-    # format all values
-    result1 = Cachex.Actions.format_fetch_value(tuple1)
-    result2 = Cachex.Actions.format_fetch_value(tuple2)
-    result3 = Cachex.Actions.format_fetch_value(tuple3)
-    result4 = Cachex.Actions.format_fetch_value(tuple4)
-    result5 = Cachex.Actions.format_fetch_value(value1)
-
-    # the first three should persist
-    assert(result1 == tuple1)
-    assert(result2 == tuple2)
-    assert(result3 == tuple3)
-    assert(result4 == tuple4)
+    # all values are acceptable as-is if they match the tagged tuple pattern
+    assert Cachex.Actions.format_fetch_value({:commit, true}) == {:commit, true}
+    assert Cachex.Actions.format_fetch_value({:ignore, true}) == {:ignore, true}
+    assert Cachex.Actions.format_fetch_value({:error, true}) == {:error, true}
+    assert Cachex.Actions.format_fetch_value({:commit, true, []}) == {:commit, true, []}

     # the value should be converted to the first
-    assert(result5 == tuple1)
+    assert Cachex.Actions.format_fetch_value(true) == {:commit, true}
   end

   # Simple test to ensure that commit normalization correctly assigns
   # options to a commit tuple without, and maintains those with.
   test "normalizing formatted :commit values" do
-    # define our base Tuples to test against
-    tuple1 = {:commit, true}
-    tuple2 = {:commit, true, []}
-
-    # normalize all values
-    result1 = Cachex.Actions.normalize_commit(tuple1)
-    result2 = Cachex.Actions.normalize_commit(tuple2)
-
-    # both should have options
-    assert(result1 == tuple2)
-    assert(result2 == tuple2)
+    assert Cachex.Actions.normalize_commit({:commit, true}) == {:commit, true, []}
+    assert Cachex.Actions.normalize_commit({:commit, true, []}) == {:commit, true, []}
   end

   # This test just provides basic coverage of the write_op function, by using
   # a prior value to determine the correct Action to use to write a value.
test "retrieving a module name to write with" do - # ask for some modules - result1 = Cachex.Actions.write_op(nil) - result2 = Cachex.Actions.write_op("value") - - # the first should be Set actions - assert(result1 == :put) - - # the second should be an Update - assert(result2 == :update) + assert Cachex.Actions.write_op(nil) == :put + assert Cachex.Actions.write_op("value") == :update end end diff --git a/test/cachex/hook_test.exs b/test/cachex/hook_test.exs index fca2b5fa..c55ec4f7 100644 --- a/test/cachex/hook_test.exs +++ b/test/cachex/hook_test.exs @@ -28,7 +28,7 @@ defmodule Cachex.HookTest do ) # turn the cache into a cache state - cache1 = Services.Overseer.retrieve(cache) + cache1 = Services.Overseer.lookup(cache) # compare the order and all hooks listed assert [ @@ -55,7 +55,7 @@ defmodule Cachex.HookTest do ) # turn the cache into a cache state - cache1 = Services.Overseer.retrieve(cache) + cache1 = Services.Overseer.lookup(cache) # locate each of the hooks (as they're different types) locate1 = Cachex.Hook.locate(cache1, :concat_hook_1) @@ -81,7 +81,7 @@ defmodule Cachex.HookTest do cache = TestUtils.create_cache(hooks: [ExecuteHook.create()]) # find the hook (with the populated runtime process identifier) - cache(hooks: hooks(post: [hook])) = Services.Overseer.get(cache) + cache(hooks: hooks(post: [hook])) = Services.Overseer.lookup(cache) # notify and fetch callers in order to send them back to this parent process Services.Informant.notify([hook], {:exec, fn -> Process.get(:"$callers") end}, nil) diff --git a/test/cachex/limit/accessed_test.exs b/test/cachex/limit/accessed_test.exs index dad13f85..7044496c 100644 --- a/test/cachex/limit/accessed_test.exs +++ b/test/cachex/limit/accessed_test.exs @@ -9,7 +9,7 @@ defmodule Cachex.Limit.AccessedTest do cache = TestUtils.create_cache(hooks: [hook(module: Cachex.Limit.Accessed)]) # create a new key to check against - {:ok, true} = Cachex.put(cache, "key", 1) + assert Cachex.put(cache, "key", 1) == :ok # fetch the raw modification time of the cache entry entry(modified: modified1) = Cachex.inspect!(cache, {:entry, "key"}) @@ -18,7 +18,7 @@ defmodule Cachex.Limit.AccessedTest do :timer.sleep(50) # fetch back the key again - {:ok, 1} = Cachex.get(cache, "key") + assert Cachex.get(cache, "key") == 1 # the modification time should update... 
     TestUtils.poll(250, true, fn ->
diff --git a/test/cachex/limit/evented_test.exs b/test/cachex/limit/evented_test.exs
index 0a678b8c..8fa3daae 100644
--- a/test/cachex/limit/evented_test.exs
+++ b/test/cachex/limit/evented_test.exs
@@ -9,18 +9,15 @@ defmodule Cachex.Limit.EventedTest do
     cache = TestUtils.create_cache()

     # retrieve the cache state
-    state = Services.Overseer.retrieve(cache)
+    state = Services.Overseer.lookup(cache)

     # add 5000 keys to the cache
     for x <- 1..5000 do
-      {:ok, true} = Cachex.put(state, x, x)
+      assert Cachex.put(state, x, x) == :ok
     end

-    # retrieve the cache size
-    count = Cachex.size!(state)
-
     # make sure all keys are there
-    assert(count == 5000)
+    assert Cachex.size(state) == 5000
   end

   # This test ensures that a cache will cap caches at a given limit by trimming
@@ -51,22 +48,19 @@
     )

     # retrieve the cache state
-    state = Services.Overseer.retrieve(cache)
+    state = Services.Overseer.lookup(cache)

     # add 100 keys to the cache
     for x <- 1..100 do
       # add the entry to the cache
-      {:ok, true} = Cachex.put(state, x, x)
+      assert Cachex.put(state, x, x) == :ok

       # tick to make sure each has a new touch time
       :timer.sleep(1)
     end

-    # retrieve the cache size
-    size1 = Cachex.size!(cache)
-
     # verify the cache size
-    assert(size1 == 100)
+    assert Cachex.size(cache) == 100

     # flush all existing hook events
     TestUtils.flush()
@@ -77,18 +71,15 @@
       {:ignore, nil}
     end)

-    # retrieve the cache size
-    size2 = Cachex.size!(cache)
-
     # verify the cache size
-    assert(size2 == 100)
+    assert Cachex.size(cache) == 100

     # add a new key to the cache to trigger evictions
-    {:ok, true} = Cachex.put(state, 101, 101)
+    assert Cachex.put(state, 101, 101) == :ok

     # verify the cache shrinks to 25%
     TestUtils.poll(250, 25, fn ->
-      Cachex.size!(state)
+      Cachex.size(state)
     end)

     # our validation step
@@ -96,10 +87,7 @@
     # iterate all keys in the range
     for x <- range do
       # retrieve whether the key exists
-      exists = Cachex."exists?!"(state, x)
-
-      # verify whether it exists
-      assert(exists == expected)
+      assert Cachex.exists?(state, x) == expected
     end
   end

@@ -109,8 +97,8 @@
     # verify the latest 25 are retained
     validate.(77..101, true)

-    # finally, verify hooks are notified
-    assert_receive({{:clear, [[]]}, {:ok, 76}})
+    # finally, verify hooks are notified via the pruned counter
+    assert_receive {{:prune, [100, [buffer: 25, reclaim: 0.75]]}, 76}

     # retrieve the policy hook definition
     cache(hooks: hooks(post: [hook1 | _])) = state
diff --git a/test/cachex/limit/scheduled_test.exs b/test/cachex/limit/scheduled_test.exs
index 114841cf..65b9963a 100644
--- a/test/cachex/limit/scheduled_test.exs
+++ b/test/cachex/limit/scheduled_test.exs
@@ -9,18 +9,15 @@ defmodule Cachex.Limit.ScheduledTest do
     cache = TestUtils.create_cache()

     # retrieve the cache state
-    state = Services.Overseer.retrieve(cache)
+    state = Services.Overseer.lookup(cache)

     # add 5000 keys to the cache
     for x <- 1..5000 do
-      {:ok, true} = Cachex.put(state, x, x)
+      assert Cachex.put(state, x, x) == :ok
     end

-    # retrieve the cache size
-    count = Cachex.size!(state)
-
     # make sure all keys are there
-    assert(count == 5000)
+    assert Cachex.size(state) == 5000
   end

   # This test ensures that a cache will cap caches at a given limit by trimming
@@ -54,32 +51,29 @@
     )

     # retrieve the cache state
-    state = Services.Overseer.retrieve(cache)
+    state = Services.Overseer.lookup(cache)

     # add 1000 keys to the cache
     for x <- 1..100 do
       # add the entry to the cache
-      {:ok, true} = Cachex.put(state, x, x)
+      assert Cachex.put(state, x, x) == :ok

       # tick to make sure each has a new touch time
       :timer.sleep(1)
     end

-    # retrieve the cache size
-    size1 = Cachex.size!(cache)
-
     # verify the cache size
-    assert(size1 == 100)
+    assert Cachex.size(cache) == 100

     # flush all existing hook events
     TestUtils.flush()

     # add a new key to the cache to trigger evictions
-    {:ok, true} = Cachex.put(state, 101, 101)
+    assert Cachex.put(state, 101, 101) == :ok

     # verify the cache shrinks to 25%
     TestUtils.poll(250, 25, fn ->
-      Cachex.size!(state)
+      Cachex.size(state)
     end)

     # our validation step
@@ -87,10 +81,7 @@
     # iterate all keys in the range
     for x <- range do
       # retrieve whether the key exists
-      exists = Cachex."exists?!"(state, x)
-
-      # verify whether it exists
-      assert(exists == expected)
+      assert Cachex.exists?(state, x) == expected
     end
   end

@@ -100,8 +91,8 @@
     # verify the latest 25 are retained
     validate.(77..101, true)

-    # finally, verify hooks are notified
-    assert_receive({{:clear, [[]]}, {:ok, 76}})
+    # finally, verify hooks are notified via the pruned counter
+    assert_receive {{:prune, [100, [buffer: 25, reclaim: 0.75]]}, 76}

     # retrieve the policy hook definition
     cache(hooks: hooks(post: [hook1 | _])) = state
diff --git a/test/cachex/router/jump_test.exs b/test/cachex/router/jump_test.exs
index 3588f6a2..ebbc6ac1 100644
--- a/test/cachex/router/jump_test.exs
+++ b/test/cachex/router/jump_test.exs
@@ -9,7 +9,7 @@ defmodule Cachex.Router.JumpTest do
     )

     # convert the name to a cache and sort
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)
     nodes = Enum.sort(nodes)

     # fetch the router state after initialize
@@ -34,7 +34,7 @@
     # create a test cache and fetch back
     cache = TestUtils.create_cache(router: router)
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # fetch the router state after initialize
     cache(router: router(state: state)) = cache
diff --git a/test/cachex/router/local_test.exs b/test/cachex/router/local_test.exs
index cd182f55..20ab683c 100644
--- a/test/cachex/router/local_test.exs
+++ b/test/cachex/router/local_test.exs
@@ -6,7 +6,7 @@ defmodule Cachex.Router.LocalTest do
     cache = TestUtils.create_cache(router: Cachex.Router.Local)

     # convert the name to a cache and sort
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # fetch the router state after initialize
     cache(router: router(state: state)) = cache
diff --git a/test/cachex/router/mod_test.exs b/test/cachex/router/mod_test.exs
index a5d39373..4bb5adde 100644
--- a/test/cachex/router/mod_test.exs
+++ b/test/cachex/router/mod_test.exs
@@ -9,7 +9,7 @@ defmodule Cachex.Router.ModTest do
     )

     # convert the name to a cache and sort
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)
     nodes = Enum.sort(nodes)

     # fetch the router state after initialize
@@ -34,7 +34,7 @@
     # create a test cache and fetch back
     cache = TestUtils.create_cache(router: router)
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # fetch the router state after initialize
     cache(router: router(state: state)) = cache
diff --git a/test/cachex/router/ring_test.exs b/test/cachex/router/ring_test.exs
index 067b8d68..390c820a 100644
--- a/test/cachex/router/ring_test.exs
+++ b/test/cachex/router/ring_test.exs
@@ -9,7 +9,7 @@ defmodule Cachex.Router.RingTest do
     )

     # convert the name to a cache and sort
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # fetch the router state after initialize
     cache(router: router(state: state)) = cache
@@ -34,7 +34,7 @@
     )

     # convert the name to a cache and sort
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # fetch the router state after initialize
     cache(router: router(state: state)) = cache
@@ -60,7 +60,7 @@
     )

     # convert the name to a cache and sort
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # pull back the routable nodes from router
     {:ok, routable1} = Cachex.Router.nodes(cache)
@@ -115,7 +115,7 @@
     )

     # convert the name to a cache and sort
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # verify that only the manage was attached to the ring
     assert Cachex.Router.nodes(cache) == {:ok, [node()]}
@@ -139,7 +139,7 @@
     )

     # convert the name to a cache and sort
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # verify that only the manager was attached to the ring
     assert Cachex.Router.nodes(cache) == {:ok, [node()]}
diff --git a/test/cachex/services/courier_test.exs b/test/cachex/services/courier_test.exs
index 872b79b9..b871a997 100644
--- a/test/cachex/services/courier_test.exs
+++ b/test/cachex/services/courier_test.exs
@@ -4,7 +4,7 @@ defmodule Cachex.Services.CourierTest do
   test "dispatching tasks" do
     # start a new cache
     cache = TestUtils.create_cache()
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # dispatch an arbitrary task
     result =
@@ -16,10 +16,7 @@
     assert result == {:commit, "my_value"}

     # check the key was placed in the table
-    retrieved = Cachex.get(cache, "my_key")
-
-    # the retrieved value should match
-    assert retrieved == {:ok, "my_value"}
+    assert Cachex.get(cache, "my_key") == "my_value"
   end

   test "dispatching tasks from multiple processes" do
@@ -34,7 +31,7 @@
     # start a new cache
     cache = TestUtils.create_cache()
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)
     parent = self()

     # dispatch an arbitrary task from the agent process
@@ -43,25 +40,19 @@
     end)

     # dispatch an arbitrary task from the current process
-    result = Services.Courier.dispatch(cache, "my_key", task)
-
-    # check the returned value with the options set
-    assert result == {:commit, "my_value"}
+    assert Services.Courier.dispatch(cache, "my_key", task) == {:commit, "my_value"}

     # check the forwarded task completed (no options)
-    assert_receive({:ok, "my_value"})
+    assert_receive "my_value"

     # check the key was placed in the table
-    retrieved = Cachex.get(cache, "my_key")
-
-    # the retrieved value should match
-    assert retrieved == {:ok, "my_value"}
+    assert Cachex.get(cache, "my_key") == "my_value"
   end

   test "gracefully handling crashes inside tasks" do
     # start a new cache
     cache = TestUtils.create_cache()
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # dispatch an arbitrary task
     result =
@@ -77,7 +68,7 @@
   test "recovering from failed tasks" do
     # start a new cache
     cache = TestUtils.create_cache()
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # kill in flight task
     parent =
diff --git a/test/cachex/services/informant_test.exs b/test/cachex/services/informant_test.exs
index 869f4022..9fc5e577 100644
--- a/test/cachex/services/informant_test.exs
+++ b/test/cachex/services/informant_test.exs
@@ -34,9 +34,9 @@ defmodule Cachex.Services.InformantTest do
     cache3 = TestUtils.create_cache(hooks: [hook3])

     # grab a state instance for the broadcast
-    state1 = Services.Overseer.retrieve(cache1)
-    state2 = Services.Overseer.retrieve(cache2)
-    state3 = Services.Overseer.retrieve(cache3)
+    state1 = Services.Overseer.lookup(cache1)
+    state2 = Services.Overseer.lookup(cache2)
+    state3 = Services.Overseer.lookup(cache3)

     # broadcast using the cache name
     Services.Informant.broadcast(state1, {:action, []}, :result)
@@ -96,11 +96,11 @@
     cache5 = TestUtils.create_cache(hooks: hook5)

     # update our hooks from the caches
-    cache(hooks: hooks(pre: [hook1])) = Services.Overseer.retrieve(cache1)
-    cache(hooks: hooks(post: [hook2])) = Services.Overseer.retrieve(cache2)
-    cache(hooks: hooks(post: [hook3])) = Services.Overseer.retrieve(cache3)
-    cache(hooks: hooks(post: [hook4])) = Services.Overseer.retrieve(cache4)
-    cache(hooks: hooks(pre: [hook5])) = Services.Overseer.retrieve(cache5)
+    cache(hooks: hooks(pre: [hook1])) = Services.Overseer.lookup(cache1)
+    cache(hooks: hooks(post: [hook2])) = Services.Overseer.lookup(cache2)
+    cache(hooks: hooks(post: [hook3])) = Services.Overseer.lookup(cache3)
+    cache(hooks: hooks(post: [hook4])) = Services.Overseer.lookup(cache4)
+    cache(hooks: hooks(pre: [hook5])) = Services.Overseer.lookup(cache5)

     # uninitialized hooks shouldn't emit
     Services.Informant.notify([hook6], {:action, []}, :result)
diff --git a/test/cachex/services/janitor_test.exs b/test/cachex/services/janitor_test.exs
index 459eb840..8bd068a1 100644
--- a/test/cachex/services/janitor_test.exs
+++ b/test/cachex/services/janitor_test.exs
@@ -20,34 +20,16 @@ defmodule Cachex.Services.JanitorTest do
     state2 = cache(expiration: expiration(lazy: false))

     # expired combination regardless of state
-    result1 =
-      Services.Janitor.expired?(entry(modified: modified1, expiration: expiration1))
+    assert Services.Janitor.expired?(entry(modified: modified1, expiration: expiration1))

     # unexpired combination regardless of state
-    result2 =
-      Services.Janitor.expired?(entry(modified: modified2, expiration: expiration2))
+    refute Services.Janitor.expired?(entry(modified: modified2, expiration: expiration2))

     # expired combination with state enabled
-    result3 =
-      Services.Janitor.expired?(
-        state1,
-        entry(modified: modified1, expiration: expiration1)
-      )
+    assert Services.Janitor.expired?(state1, entry(modified: modified1, expiration: expiration1))

     # expired combination with state disabled
-    result4 =
-      Services.Janitor.expired?(
-        state2,
-        entry(modified: modified1, expiration: expiration1)
-      )
-
-    # only the first and third should have expired
-    assert(result1)
-    assert(result3)
-
-    # the second and fourth should not have
-    refute(result2)
-    refute(result4)
+    refute Services.Janitor.expired?(state2, entry(modified: modified1, expiration: expiration1))
   end

   # The Janitor process can run on a schedule too, to automatically purge records.
@@ -71,44 +53,38 @@
       expiration: expiration(interval: ttl_interval)
     )

-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # add a new cache entry
-    {:ok, true} = Cachex.put(cache, "key", "value", expire: ttl_value)
-
-    # check that the key exists
-    exists1 = Cachex.exists?(cache, "key")
+    assert Cachex.put(cache, "key", "value", expire: ttl_value) == :ok

     # before the schedule, the key should exist
-    assert(exists1 == {:ok, true})
+    assert Cachex.exists?(cache, "key")

     # wait for the schedule
     :timer.sleep(ttl_wait)

-    # check that the key exists
-    exists2 = Cachex.exists?(cache, "key")
-
     # the key should have been removed
-    assert(exists2 == {:ok, false})
+    refute Cachex.exists?(cache, "key")

     # retrieve the metadata
-    {:ok, metadata1} = Services.Janitor.last_run(cache)
+    metadata1 = Services.Janitor.last_run(cache)

     # verify the count was updated
-    assert(metadata1[:count] == 1)
+    assert metadata1[:count] == 1

     # verify the duration is valid
-    assert(is_integer(metadata1[:duration]))
+    assert is_integer(metadata1[:duration])

     # windows will round to nearest millis (0)
-    assert(metadata1[:duration] >= 0)
+    assert metadata1[:duration] >= 0

     # verify the start time was set
-    assert(is_integer(metadata1[:started]))
-    assert(metadata1[:started] > 0)
-    assert(metadata1[:started] <= :os.system_time(:milli_seconds))
+    assert is_integer(metadata1[:started])
+    assert metadata1[:started] > 0
+    assert metadata1[:started] <= :os.system_time(:milli_seconds)

     # ensure we receive(d) the hook notification
-    assert_receive({{:purge, [[{:local, true}]]}, {:ok, 1}})
+    assert_receive {{:purge, [[{:local, true}]]}, 1}
   end
 end
diff --git a/test/cachex/services/locksmith_test.exs b/test/cachex/services/locksmith_test.exs
index a04e1e6f..db00fb3c 100644
--- a/test/cachex/services/locksmith_test.exs
+++ b/test/cachex/services/locksmith_test.exs
@@ -9,7 +9,7 @@ defmodule Cachex.Services.LocksmithTest do
     cache = TestUtils.create_cache()

     # fetch the cache state
-    state = Services.Overseer.retrieve(cache)
+    state = Services.Overseer.lookup(cache)

     # check transaction status from inside of a transaction
     transaction1 =
@@ -35,8 +35,8 @@
     cache2 = TestUtils.create_cache(transactions: false)

     # fetch the states for the caches
-    state1 = Services.Overseer.retrieve(cache1)
-    state2 = Services.Overseer.retrieve(cache2)
+    state1 = Services.Overseer.lookup(cache1)
+    state2 = Services.Overseer.lookup(cache2)

     # our write action
     write = &Services.Locksmith.transaction?/0
@@ -63,8 +63,8 @@
     cache2 = TestUtils.create_cache(transactions: true)

     # fetch the states for the caches
-    state1 = Services.Overseer.retrieve(cache1)
-    state2 = Services.Overseer.retrieve(cache2)
+    state1 = Services.Overseer.lookup(cache1)
+    state2 = Services.Overseer.lookup(cache2)

     # our transaction actions - this will lock the key "key" in both caches for
     # 50ms before incrementing the same key by 1.
@@ -116,7 +116,7 @@
     cache = TestUtils.create_cache(transactions: true)

     # retrieve the state for our cache
-    state = Services.Overseer.retrieve(cache)
+    state = Services.Overseer.lookup(cache)

     # execute a crashing transaction
     result =
@@ -136,7 +136,7 @@
     cache = TestUtils.create_cache()

     # retrieve the state for our cache
-    state = Services.Overseer.retrieve(cache)
+    state = Services.Overseer.lookup(cache)

     # lock some keys in the cache
     true = Services.Locksmith.lock(state, ["key1", "key2"])
diff --git a/test/cachex/services/overseer_test.exs b/test/cachex/services/overseer_test.exs
index 238bc8c4..ba94703a 100644
--- a/test/cachex/services/overseer_test.exs
+++ b/test/cachex/services/overseer_test.exs
@@ -48,14 +48,14 @@ defmodule Cachex.OverseerTest do
     Services.Overseer.register(name, state)

     # ensure that the state comes back
-    assert(Services.Overseer.get(state) === state)
-    assert(Services.Overseer.get(name) === state)
+    assert(Services.Overseer.lookup(state) === state)
+    assert(Services.Overseer.lookup(name) === state)

     # remove our state from the table
     Services.Overseer.unregister(name)

     # ensure the state is gone
-    assert(Services.Overseer.get(name) == nil)
+    assert(Services.Overseer.lookup(name) == nil)
   end

   # Covers the retrieval of a cache state from inside the table. We just have to
@@ -72,7 +72,7 @@
     Services.Overseer.register(name, state)

     # pull back the state from the table
-    result = Services.Overseer.retrieve(name)
+    result = Services.Overseer.lookup(name)

     # ensure nothing has changed
     assert(result == state)
@@ -91,7 +91,7 @@
     name = TestUtils.create_cache(hooks: hook)

     # retrieve our state
-    cache(expiration: expiration) = state = Services.Overseer.retrieve(name)
+    cache(expiration: expiration) = state = Services.Overseer.lookup(name)

     # store our updated states
     update1 = cache(state, expiration: expiration(expiration, default: 5))
@@ -118,7 +118,7 @@
     :timer.sleep(50)

     # pull back the state from the table
-    cache(expiration: expiration) = Services.Overseer.retrieve(name)
+    cache(expiration: expiration) = Services.Overseer.lookup(name)

     # ensure the last call is the new value
     assert(expiration(expiration, :default) == 3)
diff --git a/test/cachex/services/steward_test.exs b/test/cachex/services/steward_test.exs
index 9ef0a29b..46403ff3 100644
--- a/test/cachex/services/steward_test.exs
+++ b/test/cachex/services/steward_test.exs
@@ -14,7 +14,7 @@ defmodule Cachex.Services.StewardTest do

     # start a new cache using our forwarded hook
     cache = TestUtils.create_cache(hooks: [hook])
-    cache = Services.Overseer.retrieve(cache)
+    cache = Services.Overseer.lookup(cache)

     # the provisioned value should match
     assert_receive({:cache, ^cache})
diff --git a/test/cachex/services_test.exs b/test/cachex/services_test.exs
index 5d77c239..8e5aa7e9 100644
--- a/test/cachex/services_test.exs
+++ b/test/cachex/services_test.exs
@@ -11,7 +11,7 @@ defmodule Cachex.ServicesTest do
   test "generating default cache specifications" do
     # generate the test cache state
     name = TestUtils.create_cache()
-    cache = Services.Overseer.retrieve(name)
+    cache = Services.Overseer.lookup(name)

     # validate the services
     assert [
@@ -30,7 +30,7 @@
   test "generating cache specifications with routing" do
     # generate the test cache state using an async router
     name = TestUtils.create_cache(router: Cachex.Router.Ring)
-    cache = Services.Overseer.retrieve(name)
+    cache = Services.Overseer.lookup(name)

     # validate the services
     assert [
@@ -54,7 +54,7 @@
   test "skipping cache janitor specifications" do
     # generate the test cache state with the Janitor disabled
     name = TestUtils.create_cache(expiration: expiration(interval: nil))
-    cache = Services.Overseer.retrieve(name)
+    cache = Services.Overseer.lookup(name)

     # validate the services
     assert [
@@ -72,7 +72,7 @@
   test "locating running services" do
     # generate the test cache state with the Janitor disabled
     name = TestUtils.create_cache(expiration: expiration(interval: nil))
-    cache = Services.Overseer.retrieve(name)
+    cache = Services.Overseer.lookup(name)

     # validate the service locations
     assert Services.locate(cache, Services.Courier) != nil
diff --git a/test/cachex/spec_test.exs b/test/cachex/spec_test.exs
index f01075ee..c4788403 100644
--- a/test/cachex/spec_test.exs
+++ b/test/cachex/spec_test.exs
@@ -20,7 +20,7 @@ defmodule Cachex.SpecTest do
     assert const(:local) == [local: true]
     assert const(:notify_false) == [notify: false]
     assert const(:purge_override_call) == {:purge, [[]]}
-    assert const(:purge_override_result) == {:ok, 1}
+    assert const(:purge_override_result) == 1

     assert const(:purge_override) == [
              via: const(:purge_override_call),
@@ -60,8 +60,8 @@
     entry(modified: modified2, key: key) = entry_now(key: "key")

     assert key == "key"
-    assert_in_delta(modified1, :os.system_time(1000), 5)
-    assert_in_delta(modified2, :os.system_time(1000), 5)
+    assert_in_delta modified1, :os.system_time(1000), 5
+    assert_in_delta modified2, :os.system_time(1000), 5
   end

   test "name generation for components" do
@@ -108,7 +108,7 @@
     millis = (mega * 1_000_000 + seconds) * 1000 + div(ms, 1000)

     # check they're the same (with an error bound of 2ms)
-    assert_in_delta(now(), millis, 2)
+    assert_in_delta now(), millis, 2
   end

   test "wrapping values inside tagged Tuples",
diff --git a/test/cachex/stats_test.exs b/test/cachex/stats_test.exs
index 1e6b8d88..bdbccb20 100644
--- a/test/cachex/stats_test.exs
+++ b/test/cachex/stats_test.exs
@@ -17,27 +17,22 @@ defmodule Cachex.StatsTest do

     # set a few values in the cache
     for i <- 0..4 do
-      {:ok, true} = Cachex.put(cache, i, i)
+      assert Cachex.put(cache, i, i) == :ok
     end

     # clear the cache values
-    {:ok, 5} = Cachex.clear(cache)
-
-    # retrieve the statistics
-    {:ok, stats} = stats_no_meta(cache)
+    assert Cachex.clear(cache) == 5

     # verify the statistics
-    assert(
-      stats == %{
-        operations: 6,
-        evictions: 5,
-        writes: 5,
-        calls: %{
-          clear: 1,
-          put: 5
-        }
-      }
-    )
+    assert stats_no_meta(cache) == %{
+             operations: 6,
+             evictions: 5,
+             writes: 5,
+             calls: %{
+               clear: 1,
+               put: 5
+             }
+           }
   end

  # This test ensures that delete actions are correctly registered. We increment
@@ -54,28 +49,23 @@

     # set a few values in the cache
     for i <- 0..1 do
-      {:ok, true} = Cachex.put(cache, i, i)
+      assert Cachex.put(cache, i, i) == :ok
     end

     # delete our cache values
-    {:ok, true} = Cachex.del(cache, 0)
-    {:ok, true} = Cachex.del(cache, 1)
-
-    # retrieve the statistics
-    {:ok, stats} = stats_no_meta(cache)
+    assert Cachex.del(cache, 0) == :ok
+    assert Cachex.del(cache, 1) == :ok

     # verify the statistics
-    assert(
-      stats == %{
-        operations: 4,
-        evictions: 2,
-        writes: 2,
-        calls: %{
-          del: 2,
-          put: 2
-        }
-      }
-    )
+    assert stats_no_meta(cache) == %{
+             operations: 4,
+             evictions: 2,
+             writes: 2,
+             calls: %{
+               del: 2,
+               put: 2
+             }
+           }
   end

   # This test verifies that exists actions correctly increment the necessary keys
@@ -91,30 +81,25 @@
     )

     # set a value in the cache
-    {:ok, true} = Cachex.put(cache, 1, 1)
+    assert Cachex.put(cache, 1, 1) == :ok

     # check for a couple of keys
-    {:ok, true} = Cachex.exists?(cache, 1)
-    {:ok, false} = Cachex.exists?(cache, 2)
-
-    # retrieve the statistics
-    {:ok, stats} = stats_no_meta(cache)
+    assert Cachex.exists?(cache, 1)
+    refute Cachex.exists?(cache, 2)

     # verify the statistics
-    assert(
-      stats == %{
-        operations: 3,
-        writes: 1,
-        hits: 1,
-        misses: 1,
-        hit_rate: 50.0,
-        miss_rate: 50.0,
-        calls: %{
-          exists?: 2,
-          put: 1
-        }
-      }
-    )
+    assert stats_no_meta(cache) == %{
+             operations: 3,
+             writes: 1,
+             hits: 1,
+             misses: 1,
+             hit_rate: 50.0,
+             miss_rate: 50.0,
+             calls: %{
+               exists?: 2,
+               put: 1
+             }
+           }
   end

   # Retrieving a key will increment the hit/miss counts
@@ -129,30 +114,25 @@
     )

     # set a value in the cache
-    {:ok, true} = Cachex.put(cache, 1, 1)
+    assert Cachex.put(cache, 1, 1) == :ok

     # check for a couple of keys
-    {:ok, 1} = Cachex.get(cache, 1)
-    {:ok, nil} = Cachex.get(cache, 2)
-
-    # retrieve the statistics
-    {:ok, stats} = stats_no_meta(cache)
+    assert Cachex.get(cache, 1) == 1
+    assert Cachex.get(cache, 2) == nil

     # verify the statistics
-    assert(
-      stats == %{
-        operations: 3,
-        writes: 1,
-        hits: 1,
-        misses: 1,
-        hit_rate: 50.0,
-        miss_rate: 50.0,
-        calls: %{
-          get: 2,
-          put: 1
-        }
-      }
-    )
+    assert stats_no_meta(cache) == %{
+             operations: 3,
+             writes: 1,
+             hits: 1,
+             misses: 1,
+             hit_rate: 50.0,
+             miss_rate: 50.0,
+             calls: %{
+               get: 2,
+               put: 1
+             }
+           }
   end

   # Retrieving a key will increment the hit/miss/load counts based on whether the
@@ -168,32 +148,27 @@
     )

     # set a value in the cache
-    {:ok, true} = Cachex.put(cache, 1, 1)
+    assert Cachex.put(cache, 1, 1) == :ok

     # fetch an existing value
-    {:ok, 1} = Cachex.fetch(cache, 1, fn _ -> {:commit, "na"} end)
-    {:commit, "na"} = Cachex.fetch(cache, 2, fn _ -> {:commit, "na"} end)
-    {:ignore, "na"} = Cachex.fetch(cache, 3, fn _ -> {:ignore, "na"} end)
-
-    # retrieve the statistics
-    {:ok, stats} = stats_no_meta(cache)
+    assert Cachex.fetch(cache, 1, fn _ -> {:commit, "na"} end) == 1
+    assert Cachex.fetch(cache, 2, fn _ -> {:commit, "na"} end) == {:commit, "na"}
+    assert Cachex.fetch(cache, 3, fn _ -> {:ignore, "na"} end) == {:ignore, "na"}

     # verify the statistics
-    assert(
-      stats == %{
-        operations: 4,
-        fetches: 2,
-        writes: 2,
-        hits: 1,
-        hit_rate: 1 / 3 * 100,
-        misses: 2,
-        miss_rate: 1 / 3 * 2 * 100,
-        calls: %{
-          fetch: 3,
-          put: 1
-        }
-      }
-    )
+    assert stats_no_meta(cache) == %{
+             operations: 4,
+             fetches: 2,
+             writes: 2,
+             hits: 1,
+             hit_rate: 1 / 3 * 100,
+             misses: 2,
+             miss_rate: 1 / 3 * 2 * 100,
+             calls: %{
+               fetch: 3,
+               put: 1
+             }
+           }
   end

  # These actions can update if the key exists, or set if the key does not exist.
@@ -210,28 +185,23 @@
     )

     # incr values in the cache
-    {:ok, 5} = Cachex.incr(cache, 1, 3, default: 2)
-    {:ok, 6} = Cachex.incr(cache, 1)
+    assert Cachex.incr(cache, 1, 3, default: 2) == 5
+    assert Cachex.incr(cache, 1) == 6

     # decr values in the cache
-    {:ok, -5} = Cachex.decr(cache, 2, 3, default: -2)
-    {:ok, -6} = Cachex.decr(cache, 2)
-
-    # retrieve the statistics
-    {:ok, stats} = stats_no_meta(cache)
+    assert Cachex.decr(cache, 2, 3, default: -2) == -5
+    assert Cachex.decr(cache, 2) == -6

     # verify the statistics
-    assert(
-      stats == %{
-        operations: 4,
-        updates: 2,
-        writes: 2,
-        calls: %{
-          incr: 2,
-          decr: 2
-        }
-      }
-    )
+    assert stats_no_meta(cache) == %{
+             operations: 4,
+             updates: 2,
+             writes: 2,
+             calls: %{
+               incr: 2,
+               decr: 2
+             }
+           }
   end

   test "registering invoke actions" do
@@ -259,37 +229,32 @@
     )

     # put the base value
-    {:ok, true} = Cachex.put(cache, "list", [1, 2, 3])
+    assert Cachex.put(cache, "list", [1, 2, 3]) == :ok

     # run each command
-    {:ok, 3} = Cachex.invoke(cache, :last, "list")
-    {:ok, 1} = Cachex.invoke(cache, :lpop, "list")
-
-    # retrieve the statistics
-    {:ok, stats} = stats_no_meta(cache)
+    assert Cachex.invoke(cache, :last, "list") == 3
+    assert Cachex.invoke(cache, :lpop, "list") == 1

     # verify the statistics
-    assert(
-      stats == %{
-        operations: 6,
-        updates: 1,
-        writes: 1,
-        hits: 2,
-        hit_rate: 100.0,
-        misses: 0,
-        miss_rate: 0.0,
-        invocations: %{
-          last: 1,
-          lpop: 1
-        },
-        calls: %{
-          get: 2,
-          invoke: 2,
-          put: 1,
-          update: 1
-        }
-      }
-    )
+    assert stats_no_meta(cache) == %{
+             operations: 6,
+             updates: 1,
+             writes: 1,
+             hits: 2,
+             hit_rate: 100.0,
+             misses: 0,
+             miss_rate: 0.0,
+             invocations: %{
+               last: 1,
+               lpop: 1
+             },
+             calls: %{
+               get: 2,
+               invoke: 2,
+               put: 1,
+               update: 1
+             }
+           }
   end

   # Very similar to the clear test above, with the same behaviour except for
@@ -306,31 +271,26 @@

     # set a few values in the cache
     for i <- 0..4 do
-      {:ok, true} = Cachex.put(cache, i, i, expire: 1)
+      assert Cachex.put(cache, i, i, expire: 1) == :ok
     end

     # ensure purge
     :timer.sleep(5)

     # purge the cache values
-    {:ok, 5} = Cachex.purge(cache)
-
-    # retrieve the statistics
-    {:ok, stats} = stats_no_meta(cache)
+    assert Cachex.purge(cache) == 5

     # verify the statistics
-    assert(
-      stats == %{
-        expirations: 5,
-        operations: 6,
-        evictions: 5,
-        writes: 5,
-        calls: %{
-          purge: 1,
-          put: 5
-        }
-      }
-    )
+    assert stats_no_meta(cache) == %{
+             expirations: 5,
+             operations: 6,
+             evictions: 5,
+             writes: 5,
+             calls: %{
+               purge: 1,
+               put: 5
+             }
+           }
   end

   # This test ensures that a successful write will increment the setCount in the
@@ -347,22 +307,17 @@

     # set a few values in the cache
     for i <- 0..4 do
-      {:ok, true} = Cachex.put(cache, i, i)
+      assert Cachex.put(cache, i, i) == :ok
     end

-    # retrieve the statistics
-    {:ok, stats} = stats_no_meta(cache)
-
     # verify the statistics
-    assert(
-      stats == %{
-        operations: 5,
-        writes: 5,
-        calls: %{
-          put: 5
-        }
-      }
-    )
+    assert stats_no_meta(cache) == %{
+             operations: 5,
+             writes: 5,
+             calls: %{
+               put: 5
+             }
+           }
   end

   # This operates in the same way as the test cases above, but verifies that
@@ -377,28 +332,22 @@
     )

     # set a few values in the cache
-    {:ok, true} =
-      Cachex.put_many(cache, [
-        {1, 1},
-        {2, 2},
-        {3, 3},
-        {4, 4},
-        {5, 5}
-      ])
-
-    # retrieve the statistics
-    {:ok, stats} = stats_no_meta(cache)
+    assert Cachex.put_many(cache, [
+             {1, 1},
+             {2, 2},
+             {3, 3},
+             {4, 4},
+             {5, 5}
+           ]) == :ok

     # verify the statistics
-    assert(
-      stats == %{
-        operations: 1,
-        writes: 5,
-        calls: %{
-          put_many: 1
-        }
-      }
-    )
+    assert stats_no_meta(cache) == %{
+             operations: 1,
+             writes: 5,
+             calls: %{
+               put_many: 1
+             }
+           }
   end

   # This test verifies the take action and the incremenation of the necessary keys.
@@ -415,31 +364,26 @@
     )

     # set a value in the cache
-    {:ok, true} = Cachex.put(cache, 1, 1)
+    assert Cachex.put(cache, 1, 1) == :ok

     # delete our cache values
-    {:ok, 1} = Cachex.take(cache, 1)
-    {:ok, nil} = Cachex.take(cache, 2)
-
-    # retrieve the statistics
-    {:ok, stats} = stats_no_meta(cache)
+    assert Cachex.take(cache, 1) == 1
+    assert Cachex.take(cache, 2) == nil

     # verify the statistics
-    assert(
-      stats == %{
-        operations: 3,
-        evictions: 1,
-        writes: 1,
-        hits: 1,
-        hit_rate: 50.0,
-        misses: 1,
-        miss_rate: 50.0,
-        calls: %{
-          put: 1,
-          take: 2
-        }
-      }
-    )
+    assert stats_no_meta(cache) == %{
+             operations: 3,
+             evictions: 1,
+             writes: 1,
+             hits: 1,
+             hit_rate: 50.0,
+             misses: 1,
+             miss_rate: 50.0,
+             calls: %{
+               put: 1,
+               take: 2
+             }
+           }
   end

   # This test verifies the update actions and the incremenation of the necessary keys.
@@ -453,30 +397,25 @@
     )

     # set a value in the cache
-    {:ok, true} = Cachex.put(cache, 1, 1)
-    {:ok, true} = Cachex.touch(cache, 1)
-
-    # retrieve the statistics
-    {:ok, stats} = stats_no_meta(cache)
+    assert Cachex.put(cache, 1, 1) == :ok
+    assert Cachex.touch(cache, 1)

     # verify the statistics
-    assert(
-      stats == %{
-        operations: 2,
-        updates: 1,
-        writes: 1,
-        calls: %{
-          put: 1,
-          touch: 1
-        }
-      }
-    )
+    assert stats_no_meta(cache) == %{
+             operations: 2,
+             updates: 1,
+             writes: 1,
+             calls: %{
+               put: 1,
+               touch: 1
+             }
+           }
   end

   # Retrieves stats with no :meta field
   defp stats_no_meta(cache) do
-    with {:ok, stats} <- Cachex.stats(cache) do
-      {:ok, Map.delete(stats, :meta)}
+    with %{} = stats <- Cachex.stats(cache) do
+      Map.delete(stats, :meta)
     end
   end
 end
diff --git a/test/cachex/warmer_test.exs b/test/cachex/warmer_test.exs
index 9e1dc8e8..cfe76989 100644
--- a/test/cachex/warmer_test.exs
+++ b/test/cachex/warmer_test.exs
@@ -11,7 +11,7 @@ defmodule Cachex.WarmerTest do
     cache = TestUtils.create_cache(warmers: [warmer(module: :basic_warmer)])

     # check that the key was warmed
-    assert Cachex.get!(cache, 1) == 1
+    assert Cachex.get(cache, 1) == 1
   end

   test "warmers with long running tasks" do
@@ -27,7 +27,7 @@
       TestUtils.create_cache(warmers: [warmer(module: :long_running_warmer)])

     # check that the key was warmed
-    assert Cachex.get!(cache, 1) == 1
+    assert Cachex.get(cache, 1) == 1
   end

   test "warmers with long running async tasks" do
@@ -51,8 +51,8 @@
     cache =
       TestUtils.create_cache(warmers: [warmer(module: :long_running_async_warmer)])

     # check that the keys were warmed
-    assert Cachex.get!(cache, 1) == 1
-    assert Cachex.get!(cache, 2) == 2
+    assert Cachex.get(cache, 1) == 1
+    assert Cachex.get(cache, 2) == 2
   end

   test "warmers which set values with options" do
@@ -65,10 +65,10 @@
     cache = TestUtils.create_cache(warmers: [warmer(module: :options_warmer)])

     # check that the key was warmed
-    assert Cachex.get!(cache, 1) == 1
+    assert Cachex.get(cache, 1) == 1

     # check that there's a TTL
-    assert Cachex.ttl!(cache, 1) != nil
+    assert Cachex.ttl(cache, 1) != nil
   end

   test "warmers which don't set values" do
diff --git a/test/cachex/warmer_test.exs b/test/cachex/warmer_test.exs
index 9e1dc8e8..cfe76989 100644
--- a/test/cachex/warmer_test.exs
+++ b/test/cachex/warmer_test.exs
@@ -11,7 +11,7 @@ defmodule Cachex.WarmerTest do
     cache = TestUtils.create_cache(warmers: [warmer(module: :basic_warmer)])

     # check that the key was warmed
-    assert Cachex.get!(cache, 1) == 1
+    assert Cachex.get(cache, 1) == 1
   end

   test "warmers with long running tasks" do
@@ -27,7 +27,7 @@ defmodule Cachex.WarmerTest do
       TestUtils.create_cache(warmers: [warmer(module: :long_running_warmer)])

     # check that the key was warmed
-    assert Cachex.get!(cache, 1) == 1
+    assert Cachex.get(cache, 1) == 1
   end

   test "warmers with long running async tasks" do
@@ -51,8 +51,8 @@ defmodule Cachex.WarmerTest do
     cache =
       TestUtils.create_cache(warmers: [warmer(module: :long_running_async_warmer)])

     # check that the keys were warmed
-    assert Cachex.get!(cache, 1) == 1
-    assert Cachex.get!(cache, 2) == 2
+    assert Cachex.get(cache, 1) == 1
+    assert Cachex.get(cache, 2) == 2
   end

   test "warmers which set values with options" do
@@ -65,10 +65,10 @@ defmodule Cachex.WarmerTest do
     cache = TestUtils.create_cache(warmers: [warmer(module: :options_warmer)])

     # check that the key was warmed
-    assert Cachex.get!(cache, 1) == 1
+    assert Cachex.get(cache, 1) == 1

     # check that there's a TTL
-    assert Cachex.ttl!(cache, 1) != nil
+    assert Cachex.ttl(cache, 1) != nil
   end

   test "warmers which don't set values" do
@@ -81,7 +81,7 @@ defmodule Cachex.WarmerTest do
     cache = TestUtils.create_cache(warmers: [warmer(module: :ignore_warmer)])

     # check that the cache is empty
-    assert Cachex.empty?!(cache)
+    assert Cachex.empty?(cache)
   end

   test "warmers which aren't blocking" do
@@ -96,7 +96,7 @@ defmodule Cachex.WarmerTest do
     cache = TestUtils.create_cache(warmers: [warmer])

     # check that the key was not warmed
-    assert Cachex.get!(cache, 1) == nil
+    assert Cachex.get(cache, 1) == nil
   end

   test "providing warmers with states" do
@@ -112,7 +112,7 @@ defmodule Cachex.WarmerTest do
     cache =
       TestUtils.create_cache(warmers: [warmer(module: :state_warmer, state: state)])

     # check that the key was warmed with state
-    assert Cachex.get!(cache, "state") == state
+    assert Cachex.get(cache, "state") == state
   end

   test "triggering cache hooks from within warmers" do
@@ -139,8 +139,8 @@ defmodule Cachex.WarmerTest do
       )

     # ensure that we receive the creation of both warmers
-    assert_receive({{:put_many, [[{1, 1}], []]}, {:ok, true}})
-    assert_receive({{:put_many, [[{2, 2}], []]}, {:ok, true}})
+    assert_receive {{:put_many, [[{1, 1}], []]}, :ok}
+    assert_receive {{:put_many, [[{2, 2}], []]}, :ok}
   end

   test "accessing $callers in warmers" do
@@ -165,6 +165,6 @@ defmodule Cachex.WarmerTest do
       )

     # check callers are just us
-    assert_receive([^parent])
+    assert_receive [^parent]
   end
 end
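The warmer suite swaps the banged `Cachex.get!/2` reads for plain `Cachex.get/2` under the same convention, and the `put_many` hook notifications now carry `:ok` as the action result. A hedged sketch of the pattern these tests exercise, where `DemoWarmer` and the `:demo` cache are hypothetical stand-ins for the test warmers:

```elixir
import Cachex.Spec

defmodule DemoWarmer do
  use Cachex.Warmer

  # write the pair {1, 1} into the cache on every warming pass
  def execute(_state),
    do: {:ok, [{1, 1}]}
end

{:ok, _pid} = Cachex.start_link(:demo, warmers: [warmer(module: DemoWarmer)])

# warmed keys read back directly; unset keys read back as nil
1 = Cachex.get(:demo, 1)
nil = Cachex.get(:demo, 2)
```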
diff --git a/test/cachex_test.exs b/test/cachex_test.exs
index 7f590a3f..90bd8637 100644
--- a/test/cachex_test.exs
+++ b/test/cachex_test.exs
@@ -17,20 +17,20 @@ defmodule CachexTest do
     {:ok, pid1} = Cachex.start_link(name1)

     # check valid pid
-    assert(is_pid(pid1))
-    assert(Process.alive?(pid1))
+    assert is_pid(pid1)
+    assert Process.alive?(pid1)

     # this process should die
     spawn(fn ->
       {:ok, pid} = Cachex.start_link(name2)

-      assert(is_pid(pid))
+      assert is_pid(pid)
     end)

     # wait for spawn to end
     :timer.sleep(15)

     # process should've died
-    assert(Process.whereis(name2) == nil)
+    assert Process.whereis(name2) == nil
   end

   # Ensures that we're able to start a cache without a link to the current process.
@@ -49,20 +49,20 @@ defmodule CachexTest do
     {:ok, pid1} = Cachex.start(name1)

     # check valid pid
-    assert(is_pid(pid1))
-    assert(Process.alive?(pid1))
+    assert is_pid(pid1)
+    assert Process.alive?(pid1)

     # this process should die
     spawn(fn ->
       {:ok, pid} = Cachex.start(name2)

-      assert(is_pid(pid))
+      assert is_pid(pid)
     end)

     # wait for spawn to end
     :timer.sleep(5)

     # process should've lived
-    refute(Process.whereis(name2) == nil)
+    refute Process.whereis(name2) == nil
   end

   # Ensures that trying to start a cache when the application has not been started
@@ -88,7 +88,7 @@ defmodule CachexTest do
     {:error, reason} = Cachex.start_link(name)

     # we should receive a prompt to start our application properly
-    assert(reason == :not_started)
+    assert reason == :not_started
   end

   # This test does a simple check that a cache must be started with a valid atom
@@ -116,7 +116,7 @@ defmodule CachexTest do
     {:error, reason} = Cachex.start_link(name, hooks: hook(module: Missing))

     # we should've received an atom warning
-    assert(reason == :invalid_hook)
+    assert reason == :invalid_hook
   end

   # Naturally starting a cache when a cache already exists with the same name will
@@ -133,16 +133,16 @@ defmodule CachexTest do
     {:ok, pid} = Cachex.start_link(name)

     # check valid pid
-    assert(is_pid(pid))
-    assert(Process.alive?(pid))
+    assert is_pid(pid)
+    assert Process.alive?(pid)

     # try to start a cache with the same name
     {:error, reason1} = Cachex.start_link(name)
     {:error, reason2} = Cachex.start(name)

     # match the reason to be more granular
-    assert(reason1 == {:already_started, pid})
-    assert(reason2 == {:already_started, pid})
+    assert reason1 == {:already_started, pid}
+    assert reason2 == {:already_started, pid}
   end

   # We also need to make sure that a cache function executed against an invalid
@@ -153,13 +153,15 @@ defmodule CachexTest do
     # fetch a name
     name = TestUtils.create_name()

-    # try to execute a cache action against a missing cache and an invalid name
-    {:error, reason1} = Cachex.execute(name, & &1)
-    {:error, reason2} = Cachex.execute("na", & &1)
+    # try to execute a cache action against a missing cache
+    assert_raise ArgumentError, ~r/no cache available:/, fn ->
+      Cachex.execute(name, & &1)
+    end

-    # match the reason to be more granular
-    assert(reason1 == :no_cache)
-    assert(reason2 == :no_cache)
+    # try to execute a cache action against an invalid name
+    assert_raise ArgumentError, ~r/no cache available:/, fn ->
+      Cachex.execute("na", & &1)
+    end
   end

   # This test ensures that we provide delegate functions for Cachex functions
@@ -172,54 +174,37 @@ defmodule CachexTest do
       |> Cachex.__info__()
       |> Keyword.drop([:child_spec, :init, :start, :start_link])

-    # it has to always be even (one signature creates ! versions)
-    assert(rem(length(definitions), 2) == 0)
-
-    # verify the size to cause errors on addition/removal
-    assert(length(definitions) == 146)
-
     # validate all definitions
     for {name, arity} <- definitions do
       # create name as string
       name_st = "#{name}"

       # generate the new definition
-      inverse =
-        if String.ends_with?(name_st, "!") do
-          :"#{String.replace_trailing(name_st, "!", "")}"
-        else
-          :"#{name_st}!"
-        end
-
-      # ensure the definitions contains the inverse
-      assert({inverse, arity} in definitions)
+      if String.ends_with?(name_st, "!") do
+        # ensure the definitions contains the inverse
+        assert {:"#{String.replace_trailing(name_st, "!", "")}", arity} in definitions
+      end
     end

     # create a basic test cache
     cache = TestUtils.create_cache()

     # validate an unsafe call to test handling
-    assert_raise(Cachex.Error, fn ->
-      Cachex.get!(:missing_cache, "key")
-    end)
-
-    # validate an unsafe call to test handling
-    assert_raise(Cachex.Error, fn ->
+    assert_raise Cachex.Error, fn ->
       Cachex.transaction!(cache, ["key"], fn _key ->
         raise RuntimeError, message: "Ding dong! The witch is dead!"
       end)
-    end)
+    end

     # validate an unsafe call to fetch handling
-    assert_raise(Cachex.Error, fn ->
+    assert_raise Cachex.Error, fn ->
       Cachex.fetch!(cache, "key", fn _key ->
         raise RuntimeError, message: "Which old witch? The wicked witch!"
       end)
-    end)
+    end

-    # verify both unpacking pac
-    nil = Cachex.get!(cache, "key")
-    nil = Cachex.fetch!(cache, "key", fn _ -> nil end)
+    # verify unpacking a commit tuple
+    assert Cachex.fetch!(cache, "key", fn _ -> nil end) == nil
   end

   # This test validates `Cachex.start_link/1` maintains compatibility
@@ -242,13 +227,13 @@ defmodule CachexTest do
     {:ok, _pid} = Cachex.start_link(name: :child_spec6, transactions: true)

     # verify the caches that are created only from a name
-    {:ok, cache()} = Cachex.inspect(:child_spec1, :cache)
-    {:ok, cache()} = Cachex.inspect(:child_spec3, :cache)
-    {:ok, cache()} = Cachex.inspect(:child_spec5, :cache)
+    cache() = Cachex.inspect(:child_spec1, :cache)
+    cache() = Cachex.inspect(:child_spec3, :cache)
+    cache() = Cachex.inspect(:child_spec5, :cache)

     # double-check the caches created with options by verifying the value
-    {:ok, cache(transactions: true)} = Cachex.inspect(:child_spec2, :cache)
-    {:ok, cache(transactions: true)} = Cachex.inspect(:child_spec4, :cache)
-    {:ok, cache(transactions: true)} = Cachex.inspect(:child_spec6, :cache)
+    cache(transactions: true) = Cachex.inspect(:child_spec2, :cache)
+    cache(transactions: true) = Cachex.inspect(:child_spec4, :cache)
+    cache(transactions: true) = Cachex.inspect(:child_spec6, :cache)
   end
 end
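The closing hunks extend the convention to introspection and error handling: `Cachex.inspect` resolves straight to the cache record, while calls against unknown caches raise an `ArgumentError` instead of returning `{:error, :no_cache}`. A brief sketch of both behaviours, again with a hypothetical `:demo` cache:

```elixir
import Cachex.Spec

{:ok, _pid} = Cachex.start_link(:demo, transactions: true)

# inspect hands back the cache record without an {:ok, _} wrapper
cache(transactions: true) = Cachex.inspect(:demo, :cache)

# unknown caches raise rather than returning an error tuple
try do
  Cachex.execute(:missing_cache, & &1)
rescue
  error in ArgumentError ->
    # the message carries the "no cache available:" prefix matched above
    IO.puts(Exception.message(error))
end
```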