platform/kernel/linux-3.10.git
9 years agodma-buf: add fcntl system call support
Inki Dae [Thu, 20 Nov 2014 13:30:18 +0000 (22:30 +0900)]
dma-buf: add fcntl system call support

This patch adds a lock callback to the dmabuf framework. This callback
will be called on fcntl requests.

With this patch, the fcntl system call can be used by userspace
applications to drive the dmabuf sync mechanism.
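
A minimal userspace sketch of the intended flow, assuming the new lock
callback is driven by the ordinary record-lock commands (F_SETLKW/F_UNLCK)
on the exported dma-buf fd; the exact commands and lock-type semantics are
an assumption here, not something this log confirms.

  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>

  /* Block until DMA is done with the buffer, then claim CPU access.
   * Using F_RDLCK/F_WRLCK for read/write access is an assumption.
   */
  static int dmabuf_cpu_begin(int dmabuf_fd, short lock_type)
  {
          struct flock fl;

          memset(&fl, 0, sizeof(fl));
          fl.l_type = lock_type;      /* F_RDLCK or F_WRLCK */
          fl.l_whence = SEEK_SET;     /* whole buffer */
          return fcntl(dmabuf_fd, F_SETLKW, &fl);
  }

  /* Release CPU access again so DMA can proceed. */
  static int dmabuf_cpu_end(int dmabuf_fd)
  {
          struct flock fl;

          memset(&fl, 0, sizeof(fl));
          fl.l_type = F_UNLCK;
          fl.l_whence = SEEK_SET;
          return fcntl(dmabuf_fd, F_SETLKW, &fl);
  }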

Change-Id: Id3631cbc21e84c986e2efe040881e401ade180e8
Signed-off-by: Inki Dae <inki.dae@samsung.com>
9 years agodma-buf/dmabuf-sync: add dmabuf sync framework
Inki Dae [Thu, 20 Nov 2014 13:29:39 +0000 (22:29 +0900)]
dma-buf/dmabuf-sync: add dmabuf sync framework

The DMA Buffer synchronization API provides a buffer synchronization
mechanism based on the DMA buffer sharing mechanism[1] and the dma-fence
and reservation frameworks[2];
i.e., buffer access control between CPU and DMA, and easy-to-use interfaces
for device drivers and user applications. This API can be used
for all DMA devices using system memory as the DMA buffer, especially
for most ARM based SoCs.

For more details, please refer to Documentation/dma-buf-sync.txt

[1] http://lwn.net/Articles/470339/
[2] https://lkml.org/lkml/2014/2/24/824

Change-Id: I3b2084a3c331fc06992fa8d2a4c71378e88b10b5
Signed-off-by: Inki Dae <inki.dae@samsung.com>
9 years agolocal: drm/exynos: fix dmabuf variable name
Chanho Park [Fri, 22 Aug 2014 08:53:34 +0000 (17:53 +0900)]
local: drm/exynos: fix dmabuf variable name

Change-Id: Ie106e49107d687d053259fa35f889374a8cc0924
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
9 years agogpu/drm: fix compile error since backported
Chanho Park [Fri, 22 Aug 2014 08:41:21 +0000 (17:41 +0900)]
gpu/drm: fix compile error since backported

Change-Id: I5c9a62578057b164898c8f7880d0566e813dba65
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
9 years agodrm/vma: add access management helpers
David Herrmann [Sun, 25 Aug 2013 16:28:57 +0000 (18:28 +0200)]
drm/vma: add access management helpers

The VMA offset manager uses a device-global address-space. Hence, any
user can currently map any offset-node they want. They only need to guess
the right offset. If we wanted per open-file offset spaces, we'd either
need VM_NONLINEAR mappings or multiple "struct address_space" trees. As
both don't really scale, we implement access management in the VMA
manager itself.

We use an rb-tree to store open-files for each VMA node. On each mmap
call, GEM, TTM or the drivers must check whether the current user is
allowed to map this file.

We add a separate lock for each node as there is no generic lock available
for the caller to protect the node easily.

As we currently don't know whether an object may be used for mmap(), we
have to do access management for all objects. If it turns out to slow down
handle creation/deletion significantly, we can optimize it in several
ways:
 - Most times only a single filp is added per bo so we could use a static
   "struct file *main_filp" which is checked/added/removed first before we
   fall back to the rbtree+drm_vma_offset_file.
   This could be even done lockless with rcu.
 - Let user-space pass a hint whether mmap() should be supported on the
   bo and avoid access-management if not.
 - .. there are probably more ideas once we have benchmarks ..

v2: add drm_vma_node_verify_access() helper
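
A rough sketch of how the helpers are meant to be used (kernel context; the
example_* wrappers are illustrative, the drm_vma_node_* calls are the ones
added here):

  #include <drm/drm_vma_manager.h>

  /* On handle creation: whitelist the open-file on the node. */
  static int example_handle_create(struct drm_vma_offset_node *node,
                                   struct file *filp)
  {
          return drm_vma_node_allow(node, filp);
  }

  /* On handle deletion: drop the open-file again. */
  static void example_handle_delete(struct drm_vma_offset_node *node,
                                    struct file *filp)
  {
          drm_vma_node_revoke(node, filp);
  }

  /* In the mmap path: reject users that never held a handle (0 or -EACCES). */
  static int example_mmap_check(struct drm_vma_offset_node *node,
                                struct file *filp)
  {
          return drm_vma_node_verify_access(node, filp);
  }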

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/mm: add "best_match" flag to drm_mm_insert_node()
David Herrmann [Sat, 27 Jul 2013 11:36:27 +0000 (13:36 +0200)]
drm/mm: add "best_match" flag to drm_mm_insert_node()

Add a "best_match" flag similar to the drm_mm_search_*() helpers so we
can convert TTM to use them in follow up patches. We can also inline the
non-generic helpers and move them into the header to allow compile-time
optimizations.

To make calls to drm_mm_{search,insert}_node() more readable, this
converts the boolean argument to a flagset. There are pending patches that
add additional flags for top-down allocators and more.

v2:
 - use flag parameter instead of boolean "best_match"
 - convert *_search_free() helpers to also use flags argument
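
A minimal sketch of a converted call under the new flag interface
(DRM_MM_SEARCH_DEFAULT keeps the old first-fit behaviour; mm, size and
alignment stand in for driver-specific values):

  #include <drm/drm_mm.h>

  static int example_alloc(struct drm_mm *mm, struct drm_mm_node *node,
                           unsigned long size, unsigned alignment)
  {
          /* DRM_MM_SEARCH_BEST replaces the old "best_match = true" bool */
          return drm_mm_insert_node(mm, node, size, alignment,
                                    DRM_MM_SEARCH_BEST);
  }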

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Conflicts:
drivers/gpu/drm/i915/i915_gem.c

Change-Id: I77640db74616de3c9ae874531f71bbd81b89d5fa

9 years agodrm/vma: provide drm_vma_node_unmap() helper
David Herrmann [Wed, 24 Jul 2013 19:10:03 +0000 (21:10 +0200)]
drm/vma: provide drm_vma_node_unmap() helper

Instead of unmapping the nodes in TTM and GEM users manually, we provide
a generic wrapper which does the correct thing for all vma-nodes.

v2: remove bdev->dev_mapping test in ttm_bo_unmap_virtual_unlocked() as
ttm_mem_io_free_vm() does nothing in that case (io_reserved_vm is 0).
v4: Fix docbook comments
v5: use drm_vma_node_size()
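
A minimal sketch of the wrapper in a driver teardown path; dev_mapping is
the address_space field of the 3.10-era struct drm_device, and obj->vma_node
assumes the GEM conversion later in this series:

  #include <drm/drmP.h>
  #include <drm/drm_vma_manager.h>

  static void example_release_mmap(struct drm_gem_object *obj)
  {
          /* tears down all existing userspace mappings of this node, if any */
          drm_vma_node_unmap(&obj->vma_node, obj->dev->dev_mapping);
  }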

Cc: Dave Airlie <airlied@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Conflicts:
drivers/gpu/drm/ttm/ttm_bo.c

Change-Id: I4be1eeef8e5b4e81b5966449e2bf3691d8270aae

9 years agodrm: add unified vma offset manager
David Herrmann [Wed, 24 Jul 2013 19:06:15 +0000 (21:06 +0200)]
drm: add unified vma offset manager

If we want to map GPU memory into user-space, we need to linearize the
addresses to not confuse mm-core. Currently, GEM and TTM both implement
their own offset-managers to assign a pgoff to each object for user-space
CPU access. GEM uses a hash-table, TTM uses an rbtree.

This patch provides a unified implementation that can be used to replace
both. TTM allows partial mmaps with a given offset, so we cannot use
hashtables as the start address may not be known at mmap time. Hence, we
use the rbtree-implementation of TTM.

We could easily update drm_mm to use an rbtree instead of a linked list
for its object list and thus drop the rbtree from the vma-manager.
However, this would slow down drm_mm object allocation for all other
use-cases (rbtree insertion) and add another 4-8 bytes to each mm node.
Hence, use the separate tree but allow for later migration.

This is a rewrite of the 2012-proposal by David Airlie <airlied@linux.ie>

v2:
 - fix Docbook integration
 - drop drm_mm_node_linked() and use drm_mm_node_allocated()
 - remove unjustified likely/unlikely usage (but keep for rbtree paths)
 - remove BUG_ON() as drm_mm already does that
 - clarify page-based vs. byte-based addresses
 - use drm_vma_node_reset() for initialization, too
v4:
 - allow external locking via drm_vma_offset_un/lock_lookup()
 - add locked lookup helper drm_vma_offset_lookup_locked()
v5:
 - fix drm_vma_offset_lookup() to correctly validate range-mismatches
   (fix (offset > start + pages))
 - fix drm_vma_offset_exact_lookup() to actually do what it says
 - remove redundant vm_pages member (add drm_vma_node_size() helper)
 - remove unneeded goto
 - fix documentation
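
A minimal sketch of the new interface (sizes are page-based; the page range
0x10000/0x40000 is illustrative):

  #include <drm/drm_vma_manager.h>

  static struct drm_vma_offset_manager example_mgr;
  static struct drm_vma_offset_node example_node;

  static int example_setup(unsigned long obj_size)
  {
          int ret;

          /* manage fake offsets in the page range [0x10000, 0x10000 + 0x40000) */
          drm_vma_offset_manager_init(&example_mgr, 0x10000, 0x40000);

          /* reserve enough offset space for the object */
          ret = drm_vma_offset_add(&example_mgr, &example_node,
                                   obj_size >> PAGE_SHIFT);
          if (ret)
                  return ret;

          /* the byte offset userspace passes to mmap() */
          pr_info("fake offset: 0x%llx\n",
                  (unsigned long long)drm_vma_node_offset_addr(&example_node));
          return 0;
  }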

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Conflicts:
Documentation/DocBook/drm.tmpl
drivers/gpu/drm/Makefile

Change-Id: If3427d06b0f9b24c65268912bb75c1b90fe9ad26

9 years agolocal: fence: use smp_mb__before_atomic_inc
Chanho Park [Fri, 22 Aug 2014 08:31:30 +0000 (17:31 +0900)]
local: fence: use smp_mb__before_atomic_inc

The smp_mb__before_atomic helper is hard to merge because there are too many
prerequisite patches. Thus, just use smp_mb__before_atomic_inc because it always
converts to smp_mb.

Change-Id: Ia31d488eaf218cc4585d9256457855e1a9d6b321
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
9 years agodrm/gem: completely close gem_open vs. gem_close races
Daniel Vetter [Wed, 14 Aug 2013 22:02:45 +0000 (00:02 +0200)]
drm/gem: completely close gem_open vs. gem_close races

The gem flink name holds a reference onto the object itself, and this
self-reference would prevent a flink'ed object from ever being
freed. To break that loop we remove the flink name when the last
userspace handle disappears, i.e. when obj->handle_count reaches 0.

Now in gem_open we drop the dev->object_name_lock between the flink
name lookup and actually adding the handle. This means a concurrent
gem_close of the last handle could result in the flink name getting
reaped right in between, i.e.

Thread 1                  Thread 2
gem_open                  gem_close

flink -> obj lookup
                          handle_count drops to 0
                          remove flink name
create_handle
handle_count++

If someone now flinks this object again, we'll get a new flink name.

We can close this race by removing the lock dropping and making the
entire lookup+handle_create sequence atomic. Unfortunately to still be
able to share the handle_create logic this requires a
handle_create_tail function which drops the lock - we can't hold the
object_name_lock while calling into a driver's ->gem_open callback.

Note that for flink fixing this race isn't really important, since
racing gem_open against gem_close is clearly a userspace bug. And no
matter how the race ends, we won't leak any references.

But with dma-buf where the userspace dma-buf fd itself is refcounted
this is a valid sequence and hence we should fix it. Therefore this
patch here is just a warm-up exercise (and for consistency between
flink buffer sharing and dma-buf buffer sharing with self-imports).

Also note that this extension of the critical section in gem_open
protected by dev->object_name_lock only works because it's now a
mutex: A spinlock would conflict with the potential memory allocation
in idr_preload().

This is exercised by igt/gem_flink_race/flink_name.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/gem: switch dev->object_name_lock to a mutex
Daniel Vetter [Wed, 14 Aug 2013 22:02:44 +0000 (00:02 +0200)]
drm/gem: switch dev->object_name_lock to a mutex

I want to wrap the creation of a dma-buf from a gem object in it,
so that the obj->export_dma_buf cache can be atomically filled in.

Instead of creating a new mutex just for that variable I've figured
I can reuse the existing dev->object_name_lock, especially since
the new semantics will exactly mirror the flink obj->name already
protected by that lock.

v2: idr_preload/idr_preload_end is now an atomic section, so need to
move the mutex locking outside.

[airlied: fix up conflict with patch to make debugfs use lock]

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Conflicts:
drivers/gpu/drm/drm_info.c

Change-Id: Ic4ca630b9c6092c942208ee9a04409d4f6561fc0

9 years agodrm/gem: make drm_gem_object_handle_unreference_unlocked static
Daniel Vetter [Wed, 14 Aug 2013 22:02:39 +0000 (00:02 +0200)]
drm/gem: make drm_gem_object_handle_unreference_unlocked static

No one outside of drm should use this, the official interfaces are
drm_gem_handle_create and drm_gem_handle_delete. The handle refcounting
is purely an implementation detail of gem.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/gem: fix up flink name create race
Daniel Vetter [Wed, 14 Aug 2013 22:02:37 +0000 (00:02 +0200)]
drm/gem: fix up flink name create race

This is the 2nd attempt, I've always been a bit dissatisfied with the
tricky nature of the first one:

http://lists.freedesktop.org/archives/dri-devel/2012-July/025451.html

The issue is that the flink ioctl can race with calling gem_close on
the last gem handle. In that case we'll end up with a zero handle
count, but a flink name (and its corresponding reference). Which
results in a neat space leak.

In my first attempt I've solved this by rechecking the handle count.
But fundamentally the issue is that ->handle_count isn't your usual
refcount - it can be resurrected from 0 among other things.

For those special beasts atomic_t often suggests way more ordering than
it actually guarantees. To prevent being tricked by those hairy
semantics take the easy way out and simply protect the handle with the
existing dev->object_name_lock.

With that change implemented it's dead easy to fix the flink vs. gem
close race: When we try to create the name we simply have to check
whether there's still officially a gem handle around and if not refuse
to create the flink name. Since the handle count decrement and flink
name destruction is now also protected by that lock the race is gone
and we can't ever leak the flink reference again.

Outside of the drm core only the exynos driver looks at the handle
count, and tbh I have no idea why (it's just for debug dmesg output
luckily).

I've considered inlining the drm_gem_object_handle_free, but I plan to
add more name-like things (like the exported dma_buf) to this scheme,
so it's clearer to leave the handle freeing in its own function.

This is exercised by the new gem_flink_race i-g-t testcase, which on
my snb leaks gem objects at a rate of roughly 1k objects/s.

v2: Fix up the error path handling in handle_create and make it more
robust by simply calling object_handle_unreference.

v3: Fix up the handle_unreference logic bug - atomic_dec_and_test
returns 1 for 0. Oops.

v4: Squash in inlining of drm_gem_object_handle_reference as suggested
by Dave Airlie and add a note that we now have a testcase.

Cc: Dave Airlie <airlied@gmail.com>
Cc: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/gem: WARN about unbalanced handle refcounts
Daniel Vetter [Wed, 14 Aug 2013 22:02:36 +0000 (00:02 +0200)]
drm/gem: WARN about unbalanced handle refcounts

Trying to drop a reference we don't have is a pretty serious bug.
Trying to paper over it is an even worse offense.

So scream into dmesg with a big WARN in case that ever happens.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/gem: remove bogus NULL check from drm_gem_object_handle_unreference_unlocked
Daniel Vetter [Wed, 14 Aug 2013 22:02:35 +0000 (00:02 +0200)]
drm/gem: remove bogus NULL check from drm_gem_object_handle_unreference_unlocked

Calling this function with a NULL object is simply a bug, so papering
over a NULL object is not a good idea.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/gem: move drm_gem_object_handle_unreference_unlocked into drm_gem.c
Daniel Vetter [Wed, 14 Aug 2013 22:02:34 +0000 (00:02 +0200)]
drm/gem: move drm_gem_object_handle_unreference_unlocked into drm_gem.c

We have three callers of this function now and it's neither
performance critical nor really small. So an inline function feels
like overkill and unnecessarily separates the different parts of the
code.

Since all callers of drm_gem_object_handle_free are now in drm_gem.c
we can make that static (and remove the unused EXPORT_SYMBOL). To
avoid a forward declaration move it (and drm_gem_object_free_bug) up a
bit.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/gem: remove drm_gem_object_handle_unreference
Daniel Vetter [Tue, 16 Jul 2013 07:11:56 +0000 (09:11 +0200)]
drm/gem: remove drm_gem_object_handle_unreference

It's unused, everyone is using the _unlocked variant only.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
9 years agodrm/gem: add shmem get/put page helpers
Rob Clark [Wed, 7 Aug 2013 17:41:24 +0000 (13:41 -0400)]
drm/gem: add shmem get/put page helpers

Basically just extracting some code duplicated in gma500, omapdrm, udl,
and upcoming msm driver.
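
A minimal sketch of the extracted helpers (the gfpmask argument matches the
form introduced here; the usage around it is illustrative):

  #include <drm/drmP.h>

  static int example_use_pages(struct drm_gem_object *obj)
  {
          struct page **pages;

          /* pins obj->size >> PAGE_SHIFT shmem pages */
          pages = drm_gem_get_pages(obj, GFP_KERNEL);
          if (IS_ERR(pages))
                  return PTR_ERR(pages);

          /* ... build an sg table, map for DMA, etc. ... */

          /* release them again; mark dirty, not accessed */
          drm_gem_put_pages(obj, pages, true, false);
          return 0;
  }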

Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/gem: add drm_gem_create_mmap_offset_size()
Rob Clark [Wed, 7 Aug 2013 17:41:23 +0000 (13:41 -0400)]
drm/gem: add drm_gem_create_mmap_offset_size()

Variant of drm_gem_create_mmap_offset() which doesn't make the
assumption that virtual size and physical size (obj->size) are the same.
This is needed in omapdrm to deal with tiled buffers.  And lets us get
rid of a duplicated and slightly modified version of
drm_gem_create_mmap_offset() in omapdrm.
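
A minimal sketch for a tiled object whose mmap (virtual) size exceeds its
backing size; tiled_vsize is an illustrative driver-computed value:

  #include <drm/drmP.h>

  static int example_map_offset(struct drm_gem_object *obj, size_t tiled_vsize)
  {
          /* reserve tiled_vsize bytes of fake-offset space instead of obj->size */
          return drm_gem_create_mmap_offset_size(obj, tiled_vsize);
  }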

Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/gem: create drm_gem_dumb_destroy
Daniel Vetter [Tue, 16 Jul 2013 07:12:04 +0000 (09:12 +0200)]
drm/gem: create drm_gem_dumb_destroy

All the gem based kms drivers really want the same function to
destroy a dumb framebuffer backing storage object.

So give it to them and roll it out in all drivers.

This still leaves the option open for kms drivers which don't use GEM
for backing storage, but it does decently simplify matters for gem
drivers.
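
A minimal sketch of the roll-out in a GEM-based KMS driver (everything except
the new helper is elided or illustrative):

  #include <drm/drmP.h>

  static struct drm_driver example_driver = {
          .driver_features = DRIVER_GEM | DRIVER_MODESET,
          /* ... fops, dumb_create, dumb_map_offset, etc. elided ... */
          .dumb_destroy    = drm_gem_dumb_destroy,  /* the new common helper */
  };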

Acked-by: Inki Dae <inki.dae@samsung.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>
Cc: Ben Skeggs <skeggsb@gmail.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Cc: Alex Deucher <alexdeucher@gmail.com>
Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Conflicts:
drivers/gpu/drm/rcar-du/rcar_du_drv.c

Change-Id: I991aad3f0745732f203a85ff8b5f43e328c045a6

9 years agodrm/gem: fix mmap vma size calculations
David Herrmann [Fri, 26 Jul 2013 10:09:32 +0000 (12:09 +0200)]
drm/gem: fix mmap vma size calculations

The VMA manager is page-size based so drm_vma_node_size() returns the size
in pages. However, drm_gem_mmap_obj() requires the size in bytes. Apply
PAGE_SHIFT so we no longer get EINVAL during mmaps due to too small
buffers.
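
The fix boils down to converting pages to bytes before calling into
drm_gem_mmap_obj(); as a sketch (example_mmap is illustrative):

  static int example_mmap(struct drm_gem_object *obj,
                          struct vm_area_struct *vma)
  {
          /* drm_vma_node_size() counts pages; drm_gem_mmap_obj() wants bytes */
          return drm_gem_mmap_obj(obj,
                                  drm_vma_node_size(&obj->vma_node) << PAGE_SHIFT,
                                  vma);
  }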

This bug was introduced in commit:
  0de23977cfeb5b357ec884ba15417ae118ff9e9b
  "drm/gem: convert to new unified vma manager"

Fixes i915 gtt mmap failure reported by Sedat Dilek in:
  Re: linux-next: Tree for Jul 25 [ call-trace: drm | drm-intel related? ]

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reported-by: Sedat Dilek <sedat.dilek@gmail.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
9 years agodrm/gem: convert to new unified vma manager
David Herrmann [Wed, 24 Jul 2013 19:07:52 +0000 (21:07 +0200)]
drm/gem: convert to new unified vma manager

Use the new vma manager instead of the old hashtable. Also convert all
drivers to use the new convenience helpers. This drops all the
(map_list.hash.key << PAGE_SHIFT) non-sense.

Locking and access-management is exactly the same as before with an
additional lock inside of the vma-manager, which strictly wouldn't be
needed for gem.

v2:
 - rebase on drm-next
 - init nodes via drm_vma_node_reset() in drm_gem.c
v3:
 - fix tegra
v4:
 - remove duplicate if (drm_vma_node_has_offset()) checks
 - inline now trivial drm_vma_node_offset_addr() calls
v5:
 - skip node-reset on gem-init due to kzalloc()
 - do not allow mapping gem-objects with offsets (backwards compat)
 - remove unnecessary casts

Cc: Inki Dae <inki.dae@samsung.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
9 years agodrm/gem: simplify object initialization
David Herrmann [Thu, 11 Jul 2013 09:56:32 +0000 (11:56 +0200)]
drm/gem: simplify object initialization

drm_gem_object_init() and drm_gem_private_object_init() do exactly the
same (except for shmem alloc) so make the first use the latter to reduce
code duplication.

Also drop the return code from drm_gem_private_object_init(). It seems
unlikely that we will extend it any time soon so no reason to keep it
around. This simplifies code paths in drivers, too.
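
A minimal sketch of the two init paths after this change (the private variant
now returns void):

  #include <drm/drmP.h>

  /* shmem-backed object: allocates "size" bytes of backing storage */
  static int example_init_shmem(struct drm_device *dev,
                                struct drm_gem_object *obj, size_t size)
  {
          return drm_gem_object_init(dev, obj, size);
  }

  /* no shmem backing, e.g. for dma-buf imports; cannot fail any more */
  static void example_init_private(struct drm_device *dev,
                                   struct drm_gem_object *obj, size_t size)
  {
          drm_gem_private_object_init(dev, obj, size);
  }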

Last but not least, fix gma500 to call drm_gem_object_release() before
freeing objects that were allocated via drm_gem_private_object_init().
That isn't actually necessary for now, but might be in the future.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Acked-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
9 years agodrm: make drm_mm_init() return void
David Herrmann [Mon, 1 Jul 2013 18:32:58 +0000 (20:32 +0200)]
drm: make drm_mm_init() return void

There is no reason to return "int" as this function never fails.
Furthermore, several drivers (ast, sis) already depend on this.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/gem: add mutex lock when using drm_gem_mmap_obj
YoungJun Cho [Wed, 26 Jun 2013 23:39:58 +0000 (08:39 +0900)]
drm/gem: add mutex lock when using drm_gem_mmap_obj

Calls to drm_gem_mmap_obj() have to be protected with dev->struct_mutex,
but some callers do not take it. So add the mutex lock to the missing
callers and add an assertion that checks whether drm_gem_mmap_obj() is
called with the mutex held.
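
A minimal sketch of a fixed caller (e.g. a dma-buf mmap path), taking
dev->struct_mutex around the call; the wrapper name is illustrative:

  static int example_dmabuf_mmap(struct drm_gem_object *obj,
                                 struct vm_area_struct *vma)
  {
          struct drm_device *dev = obj->dev;
          int ret;

          mutex_lock(&dev->struct_mutex);
          ret = drm_gem_mmap_obj(obj, obj->size, vma);
          mutex_unlock(&dev->struct_mutex);

          return ret;
  }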

Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Conflicts:
drivers/gpu/drm/drm_gem_cma_helper.c
drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c

Change-Id: Icb683c218b3455f113c073c33166faab5a7fcc4c

9 years agodrm/gem: Split drm_gem_mmap() into object search and object mapping
Laurent Pinchart [Tue, 16 Apr 2013 12:14:52 +0000 (14:14 +0200)]
drm/gem: Split drm_gem_mmap() into object search and object mapping

The drm_gem_mmap() function first finds the GEM object to be mapped
based on the fake mmap offset and then maps the object. Split the object
mapping code into a standalone drm_gem_mmap_obj() function that can be
used to implement dma-buf mmap() operations.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
9 years agodrm/prime: Simplify drm_gem_remove_prime_handles
Daniel Vetter [Wed, 14 Aug 2013 22:02:47 +0000 (00:02 +0200)]
drm/prime: Simplify drm_gem_remove_prime_handles

With the reworked semantics and locking of the obj->dma_buf pointer,
this pointer is always set as long as there's still a gem handle
around and a dma_buf associated with this gem object.

Also, the per file-priv lookup-cache for dma-buf importing is also
unified between foreign and native objects.

Hence we don't need to special-case the cleanup any more and can simply
drop the clause which only runs for foreign objects, i.e. with
obj->import_attach set.

Note that with this change (actually with the previous one to always
set up obj->dma_buf even for foreign objects) it is no longer required
to set obj->import_attach when importing a foreign object. So update
comments accordingly, too.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agoseqcount: Add lockdep functionality to seqcount/seqlock structures
John Stultz [Mon, 7 Oct 2013 22:51:59 +0000 (15:51 -0700)]
seqcount: Add lockdep functionality to seqcount/seqlock structures

Currently seqlocks and seqcounts don't support lockdep.

After running across a seqcount related deadlock in the timekeeping
code, I used a less-refined and more focused variant of this patch
to narrow down the cause of the issue.

This is a first-pass attempt to properly enable lockdep functionality
on seqlocks and seqcounts.

Since seqcounts are used in the vdso gettimeofday code, I've provided
non-lockdep accessors for those needs.

I've also handled one case where there were nested seqlock writers
and there may be more edge cases.

Comments and feedback would be appreciated!
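
For reference, the usual seqcount pattern that now gains lockdep coverage
looks like this (names are illustrative; seqcount_init() is what sets up the
lockdep class):

  #include <linux/seqlock.h>
  #include <linux/spinlock.h>

  static seqcount_t example_seq;          /* seqcount_init(&example_seq) at init time */
  static DEFINE_SPINLOCK(example_lock);   /* serializes writers */
  static u64 example_value;

  static void example_write(u64 v)
  {
          spin_lock(&example_lock);
          write_seqcount_begin(&example_seq);   /* lockdep-annotated acquire */
          example_value = v;
          write_seqcount_end(&example_seq);
          spin_unlock(&example_lock);
  }

  static u64 example_read(void)
  {
          unsigned int seq;
          u64 v;

          do {
                  seq = read_seqcount_begin(&example_seq);
                  v = example_value;
          } while (read_seqcount_retry(&example_seq, seq));
          return v;
  }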

Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: netdev@vger.kernel.org
Link: http://lkml.kernel.org/r/1381186321-4906-3-git-send-email-john.stultz@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoseqlock: Add a new locking reader type
Waiman Long [Thu, 12 Sep 2013 14:55:34 +0000 (10:55 -0400)]
seqlock: Add a new locking reader type

The sequence lock (seqlock) was originally designed for the cases where
the readers do not need to block the writers by making the readers retry
the read operation when the data change.

Since then, the use cases have been expanded to include situations where
a thread does not need to change the data (effectively a reader) at all
but has to take the writer lock because it can't tolerate changes to
the protected structure.  Some examples are the d_path() function and
the getcwd() syscall in fs/dcache.c where the functions take the writer
lock on rename_lock even though they don't need to change anything in
the protected data structure at all.  This is inefficient as a reader is
now blocking other sequence number reading readers from moving forward
by pretending to be a writer.

This patch tries to eliminate this inefficiency by introducing a new
type of locking reader to the seqlock locking mechanism.  This new
locking reader will try to take an exclusive lock preventing other
writers and locking readers from going forward.  However, it won't
affect the progress of the other sequence number reading readers as the
sequence number won't be changed.
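
A minimal sketch of the new reader type (names from this patch): it excludes
writers and other locking readers, but does not bump the sequence count, so
plain read_seqbegin() readers keep making progress:

  #include <linux/seqlock.h>

  static DEFINE_SEQLOCK(example_lock);

  static void example_walk_without_change(void)
  {
          read_seqlock_excl(&example_lock);     /* blocks writers, not seq readers */
          /* ... walk the protected structure without modifying it ... */
          read_sequnlock_excl(&example_lock);
  }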

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
9 years agolockdep: Introduce lock_acquire_exclusive()/shared() helper macros
Michel Lespinasse [Mon, 8 Jul 2013 21:23:49 +0000 (14:23 -0700)]
lockdep: Introduce lock_acquire_exclusive()/shared() helper macros

In lockdep.h, the spinlock/mutex/rwsem/rwlock/lock_map acquire macros have
different definitions based on the value of CONFIG_PROVE_LOCKING.  We have
separate ifdefs for each of these definitions, which seems redundant.

Introduce lock_acquire_{exclusive,shared,shared_recursive} helpers which
will have different definitions based on CONFIG_PROVE_LOCKING.  Then all
other helper macros can be defined based on the above ones, which reduces
the amount of ifdefined code.

Signed-off-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/20130708212350.6DD1931C15E@corp2gmr1-1.hot.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoreservation: add support for read-only access using rcu
Maarten Lankhorst [Tue, 1 Jul 2014 10:58:00 +0000 (12:58 +0200)]
reservation: add support for read-only access using rcu

This adds some extra functions to deal with rcu.

reservation_object_get_fences_rcu() will obtain the list of shared
and exclusive fences without obtaining the ww_mutex.

reservation_object_wait_timeout_rcu() will wait on all fences of the
reservation_object, without obtaining the ww_mutex.

reservation_object_test_signaled_rcu() will test if all fences of the
reservation_object are signaled without using the ww_mutex.

reservation_object_get_excl and reservation_object_get_list require
the reservation object to be held, updating requires
write_seqcount_begin/end. If only the exclusive fence is needed,
rcu_dereference followed by fence_get_rcu can be used, if the shared
fences are needed it's recommended to use the supplied functions.
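
A minimal sketch of the lock-free query side (function names from this patch;
the 5 second timeout and wrapper names are illustrative):

  #include <linux/jiffies.h>
  #include <linux/reservation.h>

  /* wait for all shared + exclusive fences without taking the ww_mutex */
  static long example_wait_idle(struct reservation_object *resv)
  {
          return reservation_object_wait_timeout_rcu(resv, true, true,
                                                     msecs_to_jiffies(5000));
  }

  /* true if every fence attached to the object has signaled */
  static bool example_is_idle(struct reservation_object *resv)
  {
          return reservation_object_test_signaled_rcu(resv, true);
  }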

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Reviewed-By: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
9 years agoreservation: update api and add some helpers
Maarten Lankhorst [Tue, 1 Jul 2014 10:57:54 +0000 (12:57 +0200)]
reservation: update api and add some helpers

Move the list of shared fences to a struct, and return it in
reservation_object_get_list().
Add reservation_object_get_excl to get the exclusive fence.

Add reservation_object_reserve_shared(), which reserves space
in the reservation_object for 1 more shared fence.

reservation_object_add_shared_fence() and
reservation_object_add_excl_fence() are used to assign a new
fence to a reservation_object pointer, to complete a reservation.

Changes since v1:
- Add reservation_object_get_excl, reorder code a bit.
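
A minimal sketch of the write side (names from this patch); the caller is
assumed to hold the reservation_object's ww_mutex:

  #include <linux/reservation.h>

  static int example_attach_shared(struct reservation_object *resv,
                                   struct fence *fence)
  {
          int ret;

          /* make room for one more shared fence slot first */
          ret = reservation_object_reserve_shared(resv);
          if (ret)
                  return ret;

          reservation_object_add_shared_fence(resv, fence);
          return 0;
  }

  static void example_attach_exclusive(struct reservation_object *resv,
                                       struct fence *fence)
  {
          /* supersedes the shared fence list with a single exclusive fence */
          reservation_object_add_excl_fence(resv, fence);
  }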

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
9 years agodma-buf: add poll support, v3
Maarten Lankhorst [Tue, 1 Jul 2014 10:57:43 +0000 (12:57 +0200)]
dma-buf: add poll support, v3

Thanks to Fengguang Wu for spotting a missing static cast.

v2:
- Kill unused variable need_shared.
v3:
- Clarify the BUG() in dma_buf_release some more. (Rob Clark)

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Conflicts:
drivers/dma-buf/dma-buf.c

Change-Id: I6c0d192dfd53809a16d3564e3863c1d1f0f348c7

9 years agoreservation: add support for fences to enable cross-device synchronisation
Maarten Lankhorst [Tue, 1 Jul 2014 10:57:37 +0000 (12:57 +0200)]
reservation: add support for fences to enable cross-device synchronisation

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
9 years agomutex: Move ww_mutex definitions to ww_mutex.h
Maarten Lankhorst [Fri, 5 Jul 2013 07:29:32 +0000 (09:29 +0200)]
mutex: Move ww_mutex definitions to ww_mutex.h

Move the definitions for wound/wait mutexes out to a separate
header, ww_mutex.h. This reduces clutter in mutex.h, and
increases readability.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Dave Airlie <airlied@gmail.com>
Link: http://lkml.kernel.org/r/51D675DC.3000907@canonical.com
[ Tidied up the code a bit. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agodma-buf: use reservation objects
Maarten Lankhorst [Tue, 1 Jul 2014 10:57:26 +0000 (12:57 +0200)]
dma-buf: use reservation objects

This allows reservation objects to be used in dma-buf. It's required
for implementing polling support on the fences that belong to a dma-buf.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Mauro Carvalho Chehab <m.chehab@samsung.com> #drivers/media/v4l2-core/
Acked-by: Thomas Hellstrom <thellstrom@vmware.com> #drivers/gpu/drm/ttm
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Vincent Stehlé <vincent.stehle@laposte.net> #drivers/gpu/drm/armada/
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Conflicts:
drivers/gpu/drm/armada/armada_gem.c
drivers/gpu/drm/drm_prime.c
drivers/gpu/drm/exynos/exynos_drm_dmabuf.c
drivers/gpu/drm/i915/i915_gem_dmabuf.c
drivers/gpu/drm/nouveau/nouveau_drm.c
drivers/gpu/drm/nouveau/nouveau_gem.h
drivers/gpu/drm/nouveau/nouveau_prime.c
drivers/gpu/drm/radeon/radeon_drv.c
drivers/gpu/drm/tegra/gem.c
drivers/gpu/drm/ttm/ttm_object.c
drivers/staging/android/ion/ion.c

Change-Id: I44fbb1f41500deaf9067eb5d7e1c6ed758231d69

9 years agodrm/prime: double lock typo
Dan Carpenter [Fri, 23 Aug 2013 20:46:02 +0000 (23:46 +0300)]
drm/prime: double lock typo

There is a typo so it deadlocks on error instead of unlocking.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
9 years agodrm/prime: Always add exported buffers to the handle cache
Daniel Vetter [Wed, 14 Aug 2013 22:02:49 +0000 (00:02 +0200)]
drm/prime: Always add exported buffers to the handle cache

... not only when the dma-buf is freshly created. In contrived
examples someone else could have exported/imported the dma-buf already
and handed us the gem object with a flink name. If such an object gets
reexported as a dma_buf we won't have it in the handle cache already,
which breaks the guarantee that for dma-buf imports we always hand
back an existing handle if there is one.

This is exercised by igt/prime_self_import/with_one_bo_two_files

Now if we extend the locked sections just a notch more we can also
plug the racy buf/handle cache setup in handle_to_fd:

If evil userspace races a concurrent gem close against a prime export
operation we can end up tearing down the gem handle before the dma buf
handle cache is set up. When handle_to_fd gets around to adding the
handle to the cache there will be no one left to clean it up,
effectively leaking the bo (and the dma-buf, since the handle cache
holds a ref on the dma-buf):

Thread A                  Thread B

handle_to_fd:

lookup gem object from handle
creates new dma_buf

                          gem_close on the same handle
                          obj->dma_buf is set, but file priv buf
                          handle cache has no entry

                          obj->handle_count drops to 0

drm_prime_add_buf_handle sets up the handle cache

-> We have a dma-buf reference in the handle cache, but since the
handle_count of the gem object already dropped to 0 no one will clean
it up. When closing the drm device fd we'll hit the WARN_ON in
drm_prime_destroy_file_private.

The important change is to extend the critical section of the
filp->prime.lock to cover the gem handle lookup. This serializes with
a concurrent gem handle close.

This leak is exercised by igt/prime_self_import/export-vs-gem_close-race

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: make drm_prime_lookup_buf_handle static
Daniel Vetter [Wed, 14 Aug 2013 22:02:48 +0000 (00:02 +0200)]
drm/prime: make drm_prime_lookup_buf_handle static

... and move it to the top of the function to avoid a forward
declaration.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: proper locking+refcounting for obj->dma_buf link
Daniel Vetter [Wed, 14 Aug 2013 22:02:46 +0000 (00:02 +0200)]
drm/prime: proper locking+refcounting for obj->dma_buf link

The export dma-buf cache is semantically similar to an flink name. So
semantically it makes sense to treat it the same and remove the name
(i.e. the dma_buf pointer) and its references when the last gem handle
disappears.

Again we need to be careful, but double so: Not just could someone
race and export with a gem close ioctl (so we need to recheck
obj->handle_count again when assigning the new name), but multiple
exports can also race against each another. This is prevented by
holding the dev->object_name_lock across the entire section which
touches obj->dma_buf.

With the new scheme we also need to reinstate the obj->dma_buf link at
import time (in case the only reference userspace has held in-between
was through the dma-buf fd and not through any native gem handle). For
simplicity we don't check whether it's a native object but
unconditionally set up that link - with the new scheme of removing the
obj->dma_buf reference when the last handle disappears we can do that.

To make it clear that this is not just for exported buffers anymore
also rename it from export_dma_buf to dma_buf.

To make sure that no one can race a fd_to_handle or handle_to_fd with
gem_close we use the same tricks as in flink of extending the
dev->object_name_locking critical section. With this change we finally
have a guaranteed 1:1 relationship (at least for native objects)
between gem objects and dma-bufs, even accounting for races (which can
happen since the dma-buf itself holds a reference while in-flight).

This prevent igt/prime_self_import/export-vs-gem_close-race from
Oopsing the kernel. There is still a leak though since the per-file
priv dma-buf/handle cache handling is racy. That will be fixed in a
later patch.

v2: Remove the bogus dma_buf_put from the export_and_register_object
failure path if we've raced with the handle count dropping to 0.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Conflicts:
drivers/gpu/drm/drm_gem.c

Change-Id: I915b0e73cedffa0ba358cf00510e19dccfcb4703

9 years agodrm/prime: clarify logic a bit in drm_gem_prime_fd_to_handle
Daniel Vetter [Wed, 14 Aug 2013 22:02:43 +0000 (00:02 +0200)]
drm/prime: clarify logic a bit in drm_gem_prime_fd_to_handle

if (!ret) implies that ret == 0, so no need to clear it again. And
explicitly check for ret == 0 to indicate that we're checking an errno
integer.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: shrink critical section protected by prime lock
Daniel Vetter [Wed, 14 Aug 2013 22:02:42 +0000 (00:02 +0200)]
drm/prime: shrink critical section protected by prime lock

When exporting a gem object as a dma-buf the critical section for the
per-fd prime lock is just the adding (and in case of errors, removing)
of the handle to the per-fd lookup cache.

So restrict the critical section to just that part of the function.

This simplifies later reordering.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: use proper pointer in drm_gem_prime_handle_to_fd
Daniel Vetter [Wed, 14 Aug 2013 22:02:41 +0000 (00:02 +0200)]
drm/prime: use proper pointer in drm_gem_prime_handle_to_fd

Part of the function uses the properly-typed dmabuf variable, the
other an untyped void *buf. Kill the latter.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: fix error path in drm_gem_prime_fd_to_handle
Daniel Vetter [Wed, 14 Aug 2013 22:02:38 +0000 (00:02 +0200)]
drm/prime: fix error path in drm_gem_prime_fd_to_handle

handle_unreference only clears up the obj->name and the reference,
but would leave a dangling handle in the idr. The right thing
to do is to call handle_delete.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: remove cargo-cult locking from map_sg helper
Daniel Vetter [Wed, 14 Aug 2013 22:02:32 +0000 (00:02 +0200)]
drm/prime: remove cargo-cult locking from map_sg helper

I've checked both implementations (radeon/nouveau) and they both grab
the page array from ttm simply by dereferencing it and then wrapping
it up with drm_prime_pages_to_sg in the callback and map it with
dma_map_sg (in the helper).

Only the grabbing of the underlying page array is anything we need to
be concerned about, and either those pages are pinned independently,
or we're screwed no matter what.

And indeed, nouveau/radeon pin the backing storage in their
attach/detach functions.

Since I created this patch, CMA prime support for dma_buf was added.
drm_gem_cma_prime_get_sg_table only calls kzalloc and then creates & maps
the sg table with dma_get_sgtable. It doesn't touch any gem object
state otherwise. So the cma helpers also look safe.

The only thing we might claim it does is prevent concurrent mapping of
dma_buf attachments. But a) that's not allowed and b) the current code
is racy already since it checks whether the sg mapping exists _before_
grabbing the lock.

So the dev->struct_mutex locking here does absolutely nothing useful,
but only distracts. Remove it.

This should also help Maarten's work to eventually pin the backing
storage more dynamically by preventing locking inversions around
dev->struct_mutex.

v2: Add analysis for recently added cma helper prime code.

Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm: add mmap function to prime helpers
Joonyoung Shim [Fri, 28 Jun 2013 05:24:53 +0000 (14:24 +0900)]
drm: add mmap function to prime helpers

This adds the ability to call the low-level mmap() from the prime helpers.

Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: fix sgt NULL checking
Joonyoung Shim [Thu, 4 Jul 2013 07:19:12 +0000 (16:19 +0900)]
drm/prime: fix sgt NULL checking

The drm_gem_map_detach() can be called with a NULL sgt.

Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: fix up handle_to_fd ioctl return value
Daniel Vetter [Tue, 2 Jul 2013 07:18:39 +0000 (09:18 +0200)]
drm/prime: fix up handle_to_fd ioctl return value

In

commit da34242e5e0638312130f5bd5d2d277afbc6f806
Author: YoungJun Cho <yj44.cho@samsung.com>
Date:   Wed Jun 26 10:21:42 2013 +0900

    drm/prime: add return check for dma_buf_fd

the failure case handling was fixed up. But in the case when we
already had the buffer exported it changed the return value:
Previously we've return 0 on success, now we return the fd.

This ABI change has been caught by i-g-t/prime_self_import/with_one_bo.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=66436
Cc: YoungJun Cho <yj44.cho@samsung.com>
Cc: Seung-Woo Kim <sw0312.kim@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Tested-by: lu hua <huax.lu@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: add return check for dma_buf_fd
YoungJun Cho [Wed, 26 Jun 2013 01:21:42 +0000 (10:21 +0900)]
drm/prime: add return check for dma_buf_fd

The dma_buf_fd() can return an error when it fails to prepare the fd,
so the dma_buf needs to be put.

Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: reorder drm_prime_add_buf_handle and remove prototype
Seung-Woo Kim [Wed, 26 Jun 2013 01:21:41 +0000 (10:21 +0900)]
drm/prime: reorder drm_prime_add_buf_handle and remove prototype

Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: fix to put an exported dma_buf for adding handle failure
YoungJun Cho [Wed, 26 Jun 2013 01:21:40 +0000 (10:21 +0900)]
drm/prime: fix to put an exported dma_buf for adding handle failure

When drm_prime_add_buf_handle() returns failure for an exported
dma_buf, the dma_buf was already allocated and its refcount was
increased, so it needs to be put.

Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: support to cache mapping
Joonyoung Shim [Wed, 19 Jun 2013 06:03:05 +0000 (15:03 +0900)]
drm/prime: support to cache mapping

The drm prime code can also cache the mapping, like the GEM CMA
helpers do. It doesn't allow multiple mappings for one attachment.

[airlied: rebased on top of other prime changes]
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: replace NULL with error value in drm_prime_pages_to_sg
YoungJun Cho [Mon, 24 Jun 2013 07:40:53 +0000 (16:40 +0900)]
drm/prime: replace NULL with error value in drm_prime_pages_to_sg

Instead of NULL, the error value is cast with ERR_PTR() for
drm_prime_pages_to_sg() and IS_ERR_OR_NULL() macro is replaced
with IS_ERR() macro for drm_gem_map_dma_buf().

Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm/prime: fix to check return of dma_map_sg in prime helper
YoungJun Cho [Mon, 24 Jun 2013 06:34:21 +0000 (15:34 +0900)]
drm/prime: fix to check return of dma_map_sg in prime helper

The dma_map_sg(), in map_dma_buf callback operation of prime helper,
can return 0 when it fails to map, so it needs to release related
resources.

Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agodrm: move pinning/unpinning to buffer attach
Maarten Lankhorst [Tue, 9 Apr 2013 07:52:54 +0000 (09:52 +0200)]
drm: move pinning/unpinning to buffer attach

This allows importing bo's into their own device to work without requiring that the buffer is pinned in GART.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Conflicts:
drivers/gpu/drm/drm_prime.c

Change-Id: I17dbe44549acdf570e155d11370752b0b4ab7919

9 years agodrm: add unpin function to prime helpers
Maarten Lankhorst [Tue, 9 Apr 2013 07:18:44 +0000 (09:18 +0200)]
drm: add unpin function to prime helpers

Prevents buffers from being pinned forever.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Conflicts:
drivers/gpu/drm/drm_prime.c

Change-Id: I220c7924a9b08a13646fcc43c80cd9c031dd2d79

9 years agoseqno-fence: Hardware dma-buf implementation of fencing (v6)
Maarten Lankhorst [Tue, 1 Jul 2014 10:57:20 +0000 (12:57 +0200)]
seqno-fence: Hardware dma-buf implementation of fencing (v6)

This type of fence can be used with hardware synchronization for simple
hardware that can block execution until the condition
(dma_buf[offset] - value) >= 0 has been met when WAIT_GEQUAL is used,
or (dma_buf[offset] != 0) has been met when WAIT_NONZERO is set.

A software fallback still has to be provided in case the fence is used
with a device that doesn't support this mechanism. It is useful to expose
this for graphics cards that have an op to support this.

Some cards like i915 can export those, but don't have an option to wait,
so they need the software fallback.

I extended the original patch by Rob Clark.

v1: Original
v2: Renamed from bikeshed to seqno, moved into dma-fence.c since
    not much was left of the file. Lots of documentation added.
v3: Use fence_ops instead of custom callbacks. Moved to own file
    to avoid circular dependency between dma-buf.h and fence.h
v4: Add spinlock pointer to seqno_fence_init
v5: Add condition member to allow wait for != 0.
    Fix small style errors pointed out by checkpatch.
v6: Move to a separate file. Fix up api changes in fences.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Rob Clark <robdclark@gmail.com> #v4
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
9 years agofence: dma-buf cross-device synchronization (v18)
Maarten Lankhorst [Tue, 1 Jul 2014 10:57:14 +0000 (12:57 +0200)]
fence: dma-buf cross-device synchronization (v18)

A fence can be attached to a buffer which is being filled or consumed
by hw, to allow userspace to pass the buffer without waiting to another
device.  For example, userspace can call page_flip ioctl to display the
next frame of graphics after kicking the GPU but while the GPU is still
rendering.  The display device sharing the buffer with the GPU would
attach a callback to get notified when the GPU's rendering-complete IRQ
fires, to update the scan-out address of the display, without having to
wake up userspace.

A driver must allocate a fence context for each execution ring that can
run in parallel. The function for this takes an argument with how many
contexts to allocate:
  + fence_context_alloc()

A fence is a transient, one-shot deal.  It is allocated and attached
to one or more dma-buf's.  When the one that attached it is done with
the pending operation, it can signal the fence:
  + fence_signal()

To have a rough approximation whether a fence is fired, call:
  + fence_is_signaled()

The dma-buf-mgr handles tracking, and waiting on, the fences associated
with a dma-buf.

The one pending on the fence can add an async callback:
  + fence_add_callback()

The callback can optionally be cancelled with:
  + fence_remove_callback()

To wait synchronously, optionally with a timeout:
  + fence_wait()
  + fence_wait_timeout()

When emitting a fence, call:
  + trace_fence_emit()

To annotate that a fence is blocking on another fence, call:
  + trace_fence_annotate_wait_on(fence, on_fence)

A default software-only implementation is provided, which can be used
by drivers attaching a fence to a buffer when they have no other means
for hw sync.  But a memory backed fence is also envisioned, because it
is common that GPU's can write to, or poll on some memory location for
synchronization.  For example:

  fence = custom_get_fence(...);
  if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
    dma_buf *fence_buf = seqno_fence->sync_buf;
    get_dma_buf(fence_buf);

    ... tell the hw the memory location to wait ...
    custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
  } else {
    /* fall-back to sw sync */
    fence_add_callback(fence, my_cb);
  }

On SoC platforms, if some other hw mechanism is provided for synchronizing
between IP blocks, it could be supported as an alternate implementation
with its own fence ops in a similar way.

enable_signaling callback is used to provide sw signaling in case a cpu
waiter is requested or no compatible hardware signaling could be used.

The intention is to provide a userspace interface (presumably via eventfd)
later, to be used in conjunction with dma-buf's mmap support for sw access
to buffers (or for userspace apps that would prefer to do their own
synchronization).

v1: Original
v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
    that dma-fence didn't need to care about the sw->hw signaling path
    (it can be handled same as sw->sw case), and therefore the fence->ops
    can be simplified and more handled in the core.  So remove the signal,
    add_callback, cancel_callback, and wait ops, and replace with a simple
    enable_signaling() op which can be used to inform a fence supporting
    hw->hw signaling that one or more devices which do not support hw
    signaling are waiting (and therefore it should enable an irq or do
    whatever is necessary in order that the CPU is notified when the
    fence is passed).
v3: Fix locking fail in attach_fence() and get_fence()
v4: Remove tie-in w/ dma-buf..  after discussion w/ danvet and mlankorst
    we decided that we need to be able to attach one fence to N dma-buf's,
    so using the list_head in dma-fence struct would be problematic.
v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments
    about checking if fence fired or not. This is broken by design.
    waitqueue_active during destruction is now fatal, since the signaller
    should be holding a reference in enable_signalling until it signalled
    the fence. Pass the original dma_fence_cb along, and call __remove_wait
    in the dma_fence_callback handler, so that no cleanup needs to be
    performed.
v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
    fence wasn't signaled yet, for example for hardware fences that may
    choose to signal blindly.
v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
    header and fixed include mess. dma-fence.h now includes dma-buf.h
    All members are now initialized, so kmalloc can be used for
    allocating a dma-fence. More documentation added.
v9: Change compiler bitfields to flags, change return type of
    enable_signaling to bool. Rework dma_fence_wait. Added
    dma_fence_is_signaled and dma_fence_wait_timeout.
    s/dma// and change exports to non GPL. Added fence_is_signaled and
    fence_enable_sw_signaling calls, add ability to override default
    wait operation.
v10: remove event_queue, use a custom list, export try_to_wake_up from
    scheduler. Remove fence lock and use a global spinlock instead,
    this should hopefully remove all the locking headaches I was having
    on trying to implement this. enable_signaling is called with this
    lock held.
v11:
    Use atomic ops for flags, lifting the need for some spin_lock_irqsaves.
    However I kept the guarantee that after fence_signal returns, it is
    guaranteed that enable_signaling has either been called to completion,
    or will not be called any more.

    Add contexts and seqno to base fence implementation. This allows you
    to wait for less fences, by testing for seqno + signaled, and then only
    wait on the later fence.

    Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier.
    An CONFIG_DEBUG_FENCE will be added to turn off the FENCE_TRACE
    spam, and another runtime option can turn it off at runtime.
v12:
    Add CONFIG_FENCE_TRACE. Add missing documentation for the fence->context
    and fence->seqno members.
v13:
    Fixup CONFIG_FENCE_TRACE kconfig description.
    Move fence_context_alloc to fence.
    Simplify fence_later.
    Kill priv member to fence_cb.
v14:
    Remove priv argument from fence_add_callback, oops!
v15:
    Remove priv from documentation.
    Explicitly include linux/atomic.h.
v16:
    Add trace events.
    Import changes required by android syncpoints.
v17:
    Use wake_up_state instead of try_to_wake_up. (Colin Cross)
    Fix up commit description for seqno_fence. (Rob Clark)
v18:
    Rename release_fence to fence_release.
    Move to drivers/dma-buf/.
    Rename __fence_is_signaled and __fence_signal to *_locked.
    Rename __fence_init to fence_init.
    Make fence_default_wait return a signed long, and fix wait ops too.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Thierry Reding <thierry.reding@gmail.com> #use smp_mb__before_atomic()
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Conflicts:
drivers/base/Kconfig

Change-Id: Ie62c8c33a0cb7ca3df596f47ef328c33c4468139

9 years agodma-buf: move to drivers/dma-buf
Maarten Lankhorst [Tue, 1 Jul 2014 10:57:08 +0000 (12:57 +0200)]
dma-buf: move to drivers/dma-buf

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
9 years agodma-buf: update debugfs output
Sumit Semwal [Mon, 3 Feb 2014 09:39:12 +0000 (15:09 +0530)]
dma-buf: update debugfs output

Russell King observed 'weird'-looking output from debugfs, and also suggested
better ways of getting device names (use KBUILD_MODNAME and dev_name()).

This patch addresses these issues to make the debugfs output correct and better
looking.

While at it, replace seq_printf with seq_puts to remove the checkpatch.pl
warnings.
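
For reference, the kind of change involved (the format string shown is
illustrative, not necessarily the exact one in dma-buf.c):

  /* before: checkpatch.pl warns about seq_printf() with no format arguments */
  seq_printf(s, "\nDma-buf Objects:\n");

  /* after: seq_puts() is enough for a fixed string */
  seq_puts(s, "\nDma-buf Objects:\n");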

Reported-by: Russell King - ARM Linux <linux@arm.linux.org.uk>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
9 years agoreservation: cross-device reservation support, v4
Maarten Lankhorst [Thu, 27 Jun 2013 11:48:16 +0000 (13:48 +0200)]
reservation: cross-device reservation support, v4

This adds support for a generic reservations framework that can be
hooked up to ttm and dma-buf and allows easy sharing of reservations
across devices.

The idea is that both a dma-buf and a ttm object get a pointer to a
struct reservation_object, which has to be reserved before anything is
done with the contents of the dma-buf.
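
A minimal usage sketch, assuming the header added here is
<linux/reservation.h> and the object embeds its ww_mutex as ->lock
(function name and error handling are illustrative):

  #include <linux/reservation.h>
  #include <linux/ww_mutex.h>

  /* Reserve the object before touching the shared buffer's contents. */
  static int access_shared_buffer(struct reservation_object *resv,
                                  struct ww_acquire_ctx *ctx)
  {
          int ret;

          ret = ww_mutex_lock(&resv->lock, ctx);
          if (ret)
                  return ret;     /* -EDEADLK means back off and retry */

          /* ... operate on the dma-buf / ttm object here ... */

          ww_mutex_unlock(&resv->lock);
          return 0;
  }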

Changes since v1:
 - Fix locking issue in ticket_reserve, which could cause mutex_unlock
   to be called too many times.
Changes since v2:
 - All fence-related calls and members have been taken out for now;
   what's left is the bare minimum to be useful for the ttm locking conversion.
Changes since v3:
 - Removed helper functions too. The documentation has an example
   implementation for locking. With the move to ww_mutex there is no
   need to have much logic any more.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
9 years agoRevert "dmabuf-sync: add buffer synchronization framework"
Chanho Park [Tue, 19 Aug 2014 12:40:44 +0000 (21:40 +0900)]
Revert "dmabuf-sync: add buffer synchronization framework"

This reverts commit 7a9958fedb90ef4000b6461d77a5c6dfd795c1c1.

9 years agoRevert "dmabuf-sync: add cache operation feature"
Chanho Park [Tue, 19 Aug 2014 12:40:35 +0000 (21:40 +0900)]
Revert "dmabuf-sync: add cache operation feature"

This reverts commit 22a2b813ad54d967edf9d8117662fea25093f7d0.

9 years agoRevert "dma-buf: add lock callback for fcntl system call."
Chanho Park [Tue, 19 Aug 2014 12:40:27 +0000 (21:40 +0900)]
Revert "dma-buf: add lock callback for fcntl system call."

This reverts commit 30d585606b85e454113b79478b6b6bb1991dd210.

9 years agoRevert "dmabuf-sync: fix sync lock to multiple read"
Chanho Park [Tue, 19 Aug 2014 12:40:19 +0000 (21:40 +0900)]
Revert "dmabuf-sync: fix sync lock to multiple read"

This reverts commit 5c6a3a47e9a5b4286e4219bd70e9917b8ffee414.

9 years agoRevert "dmabuf-sync: remove unnecessary the use of mutex lock."
Chanho Park [Tue, 19 Aug 2014 12:40:12 +0000 (21:40 +0900)]
Revert "dmabuf-sync: remove unnecessary the use of mutex lock."

This reverts commit c75e1e7a03b157842638e55b27f28c41a9a3dc2b.

9 years agoRevert "dmabuf-sync: add private backend callbacks"
Chanho Park [Tue, 19 Aug 2014 12:40:02 +0000 (21:40 +0900)]
Revert "dmabuf-sync: add private backend callbacks"

This reverts commit 5d2749a0ac3be2a3ed43a24a88d821e26097bf1e.

9 years agoRevert "dmabuf-sync: add select system call support."
Chanho Park [Tue, 19 Aug 2014 12:39:54 +0000 (21:39 +0900)]
Revert "dmabuf-sync: add select system call support."

This reverts commit 4439a419906d4fe3d7e5093292bd2f4f4fbfc8c2.

9 years agoRevert "dma-buf: return POLLIN | POLLOUT instead of POLLERR"
Chanho Park [Tue, 19 Aug 2014 12:39:46 +0000 (21:39 +0900)]
Revert "dma-buf: return POLLIN | POLLOUT instead of POLLERR"

This reverts commit 494805a828f760a3b36629875cc123cc1e396aa8.

9 years agoRevert "dmabuf-sync: update it to patch v8"
Chanho Park [Tue, 19 Aug 2014 12:39:36 +0000 (21:39 +0900)]
Revert "dmabuf-sync: update it to patch v8"

This reverts commit cf7e07ce2d9843105d2ed8f9d30ee66c06d83bb0.

9 years agosensorhub: add sentinel into array to fix out-of-bound memory access 67/30567/2
Seung-Woo Kim [Thu, 20 Nov 2014 08:25:03 +0000 (17:25 +0900)]
sensorhub: add sentinel into array to fix out-of-bound memory access

Without a sentinel, passing the array to of_match_node() causes an
out-of-bounds memory access, so this patch adds a sentinel entry to
ssp_of_match.
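
The pattern being fixed looks roughly like this (the compatible string
is illustrative, not the driver's actual entry):

  #include <linux/of.h>

  /* of_match_node() walks the table until it reaches an all-zero entry,
   * so the terminating sentinel is mandatory. */
  static const struct of_device_id ssp_of_match[] = {
          { .compatible = "samsung,sensorhub" },  /* illustrative entry */
          { /* sentinel */ },
  };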

Change-Id: I66d69b10f6e96ceb0a874554249317416c58471d
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
9 years agortc: s5m: Add sentinel into array to fix out-of-bound memory access 66/30566/2
Seung-Woo Kim [Thu, 20 Nov 2014 08:24:19 +0000 (17:24 +0900)]
rtc: s5m: Add sentinel into array to fix out-of-bound memory access

Without a sentinel in the platform_device_id array, platform_match()
causes an out-of-bounds memory access, so this patch adds a sentinel
entry to s5m_rtc_id.
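
The same idea for a platform_device_id table (the entries shown are
illustrative):

  #include <linux/platform_device.h>

  /* platform_match() iterates until it hits an entry with an empty name,
   * so the table must end with a zeroed sentinel. */
  static const struct platform_device_id s5m_rtc_id[] = {
          { "s5m-rtc", 0 },       /* illustrative entry */
          { /* sentinel */ },
  };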

Change-Id: I65af741dd117d017ccf03bd8ad833fa4b165ab9b
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
9 years agoiio: st_gyro: Add sentinel into array to fix out-of-bound memory access 65/30565/2
Seung-Woo Kim [Thu, 20 Nov 2014 07:45:47 +0000 (16:45 +0900)]
iio: st_gyro: Add sentinel into array to fix out-of-bound memory access

Without a sentinel, passing the array to of_match_node() causes an
out-of-bounds memory access, so this patch adds one.

Change-Id: I22a4c117f68bba05acc27e7b4c6ad86471e6cf6d
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
9 years agoARM: EXYNOS: Add sentinel into array to fix out-of-bound memory access 64/30564/2
Seung-Woo Kim [Wed, 19 Nov 2014 04:08:33 +0000 (13:08 +0900)]
ARM: EXYNOS: Add sentinel into array to fix out-of-bound memory access

The array exynos_pinctrl_ids does not have a sentinel, but it is used
by of_match_node(). This causes an out-of-bounds memory access, so this
patch adds a sentinel entry to exynos_pinctrl_ids.

Change-Id: Ic3f5cb4bcc41baa27d92ec7e9386adc4a80b813a
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
9 years agopackaging: update to 3.10.60 v3.10.60-rebase
Chanho Park [Tue, 18 Nov 2014 03:03:39 +0000 (12:03 +0900)]
packaging: update to 3.10.60

Change-Id: Icf9f18a293de098cc93b863a49591660807b51bc
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
9 years agodrm/exynos: use irq_flags instead of triggering
Joonyoung Shim [Thu, 30 Oct 2014 04:36:26 +0000 (13:36 +0900)]
drm/exynos: use irq_flags instead of triggering

drm_handle_vblank() should be called on every vsync; on the i80
interface, the TE interrupt is what signals vsync.
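
A hedged sketch of the idea (handler name, pipe index and dev_id layout
are assumptions, not this driver's exact code):

  #include <drm/drmP.h>
  #include <linux/interrupt.h>

  /* On an i80 panel the TE pulse is the only vsync notification,
   * so forward it to the DRM vblank machinery. */
  static irqreturn_t te_irq_handler(int irq, void *dev_id)
  {
          struct drm_device *drm = dev_id;

          drm_handle_vblank(drm, 0);      /* crtc/pipe 0 is illustrative */
          return IRQ_HANDLED;
  }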

Change-Id: I620346fc78e02589b398a0aaee74a2eb60579720
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
9 years agopackaging: separate modules.img
Chanho Park [Thu, 6 Nov 2014 10:12:52 +0000 (19:12 +0900)]
packaging: separate modules.img

This patch separates modules.img from /boot. We'll add a new partition
for modules.

Change-Id: I50ef63dfebacd389c96d7e012970f1a1e9796125
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
9 years agogpu: arm: mali400: fix build warning
Joonyoung Shim [Thu, 6 Nov 2014 04:45:29 +0000 (13:45 +0900)]
gpu: arm: mali400: fix build warning

drivers/gpu/arm/mali400/mali/platform/exynos4/exynos4.c: In function ‘mali_platform_init’:
drivers/gpu/arm/mali400/mali/platform/exynos4/exynos4.c:346:2: warning: ignoring return value of ‘regulator_enable’, declared with attribute warn_unused_result [-Wunused-result]
  regulator_enable(mali->vdd_g3d);
  ^
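
One way the warning can be addressed (a sketch; the error path shown is
illustrative rather than the exact fix applied here):

  int ret;

  ret = regulator_enable(mali->vdd_g3d);
  if (ret) {
          pr_err("mali: failed to enable vdd_g3d regulator: %d\n", ret);
          return ret;
  }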

Change-Id: I72d60e94e2f7380fcc7a457cf487368037d30b7a
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
9 years agogpu: arm: mali400: add exynos3250 support for R4P0_REL0
Joonyoung Shim [Thu, 6 Nov 2014 04:55:17 +0000 (13:55 +0900)]
gpu: arm: mali400: add exynos3250 support for R4P0_REL0

This is based on drivers/gpu/arm/mali400/mali/platform/exynos4/exynos4.c

Change-Id: Ib030f2463648983f3855d827d022c15ad3a76ab9
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
9 years agogpu: arm: mali400: fix warning on booting
Joonyoung Shim [Fri, 1 Aug 2014 09:38:41 +0000 (18:38 +0900)]
gpu: arm: mali400: fix warning on booting

This fixes the warnings below.

[    1.461424] ------------[ cut here ]------------
[    1.464524] WARNING: at fs/proc/generic.c:101 __xlate_proc_name+0xa8/0xbc()
[    1.471444] name '/gpu@13000000'
[    1.474651] Modules linked in:
[    1.477678] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.10.39-01649-g6491ddb-dirty #15
[    1.485623] [<c0014224>] (unwind_backtrace+0x0/0xf4) from [<c0011520>] (show_stack+0x10/0x14)
[    1.494120] [<c0011520>] (show_stack+0x10/0x14) from [<c001ff7c>] (warn_slowpath_common+0x54/0x6c)
[    1.503050] [<c001ff7c>] (warn_slowpath_common+0x54/0x6c) from [<c001ffc4>] (warn_slowpath_fmt+0x30/0x40)
[    1.512598] [<c001ffc4>] (warn_slowpath_fmt+0x30/0x40) from [<c010774c>] (__xlate_proc_name+0xa8/0xbc)
[    1.521886] [<c010774c>] (__xlate_proc_name+0xa8/0xbc) from [<c01077ac>] (__proc_create+0x4c/0x100)
[    1.530910] [<c01077ac>] (__proc_create+0x4c/0x100) from [<c0107b18>] (proc_mkdir_data+0x2c/0x68)
[    1.539766] [<c0107b18>] (proc_mkdir_data+0x2c/0x68) from [<c007b72c>] (register_handler_proc+0xd8/0xf0)
[    1.549233] [<c007b72c>] (register_handler_proc+0xd8/0xf0) from [<c0077b38>] (__setup_irq+0x1e4/0x440)
[    1.558525] [<c0077b38>] (__setup_irq+0x1e4/0x440) from [<c0077ec4>] (request_threaded_irq+0xa8/0x128)
[    1.567817] [<c0077ec4>] (request_threaded_irq+0xa8/0x128) from [<c02a6f34>] (_mali_osk_irq_init+0x64/0x124)
[    1.577628] [<c02a6f34>] (_mali_osk_irq_init+0x64/0x124) from [<c02afa58>] (mali_pp_create+0xac/0x23c)
[    1.586899] [<c02afa58>] (mali_pp_create+0xac/0x23c) from [<c02aa94c>] (mali_initialize_subsystems+0x21c/0x7d8)
[    1.596966] [<c02aa94c>] (mali_initialize_subsystems+0x21c/0x7d8) from [<c02ab854>] (mali_probe+0x3c/0x254)
[    1.606697] [<c02ab854>] (mali_probe+0x3c/0x254) from [<c02bd314>] (driver_probe_device+0x88/0x244)
[    1.615715] [<c02bd314>] (driver_probe_device+0x88/0x244) from [<c02bd5a0>] (__driver_attach+0x8c/0x90)
[    1.625089] [<c02bd5a0>] (__driver_attach+0x8c/0x90) from [<c02bb8f0>] (bus_for_each_dev+0x60/0x94)
[    1.634121] [<c02bb8f0>] (bus_for_each_dev+0x60/0x94) from [<c02bcb68>] (bus_add_driver+0x1c0/0x24c)
[    1.643232] [<c02bcb68>] (bus_add_driver+0x1c0/0x24c) from [<c02bdb78>] (driver_register+0x78/0x140)
[    1.652345] [<c02bdb78>] (driver_register+0x78/0x140) from [<c02abaa4>] (mali_module_init+0xc/0x50)
[    1.661370] [<c02abaa4>] (mali_module_init+0xc/0x50) from [<c000870c>] (do_one_initcall+0x108/0x158)
[    1.670505] [<c000870c>] (do_one_initcall+0x108/0x158) from [<c07bec54>] (kernel_init_freeable+0x13c/0x1dc)
[    1.680214] [<c07bec54>] (kernel_init_freeable+0x13c/0x1dc) from [<c05a05bc>] (kernel_init+0xc/0x160)
[    1.689411] [<c05a05bc>] (kernel_init+0xc/0x160) from [<c000df58>] (ret_from_fork+0x14/0x3c)
[    1.697837] ---[ end trace e694d4bb842a349f ]---

Change-Id: Ic2b9ef0388f929e5d028ccd3b10882aecc9c815e
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
9 years agogpu: arm: mali400: modify DVFS tables and setting
Joonyoung Shim [Thu, 6 Nov 2014 04:29:27 +0000 (13:29 +0900)]
gpu: arm: mali400: modify DVFS tables and setting

This comes from the commit ("local/ARM/MALI400: R4P0_REL0: Clean up codes")
in the in-house kernel.

Change-Id: Id835808d157438527d32b3f3cf46108ab8d45fb6
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
9 years agogpu: arm: mali400: fix clocks for R4P0_REL0
Joonyoung Shim [Tue, 4 Nov 2014 05:30:54 +0000 (14:30 +0900)]
gpu: arm: mali400: fix clocks for R4P0_REL0

The clock setting needs to be changed to match commit 0a88cf3 ("ARM: dts:
exynos4x12: clean up clock property for gpu node").

Change-Id: I18d40e382c152a1a7c9dcd3077cbc2a83b265485
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
9 years agogpu: arm: mali400: use exynos platform of R3P2_REL0 for R4P0_REL0
Joonyoung Shim [Fri, 1 Aug 2014 09:34:10 +0000 (18:34 +0900)]
gpu: arm: mali400: use exynos platform of R3P2_REL0 for R4P0_REL0

The R3P2_REL0 exynos platform code is a better fit than the R4P0_REL0 code.

Change-Id: Ia97c1b800a209a98a860f533e5617efebcf3e600
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
9 years agogpu: arm: mali400: remove unnecessary #if 0 for R4P0_REL0
Joonyoung Shim [Thu, 6 Nov 2014 02:00:02 +0000 (11:00 +0900)]
gpu: arm: mali400: remove unnecessary #if 0 for R4P0_REL0

It is not clear why it was added.

Change-Id: I60ecbc113d4a9fb1e32971bd249f1fe167b0c2d9
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
9 years agoARM/MALI400: R4P0_REL0: port with exynos common platform
YoungJun Cho [Wed, 11 Jun 2014 01:08:09 +0000 (10:08 +0900)]
ARM/MALI400: R4P0_REL0: port with exynos common platform

This patch ports mali400 r4p0 rel0 to the exynos common platform.

Change-Id: I741c2fbd76cbb7177472ea61b40d7fab22a7c081
Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
9 years agogpu: arm: Add mali400 r4p0_rel0 version
Joonyoung Shim [Fri, 1 Aug 2014 07:35:19 +0000 (16:35 +0900)]
gpu: arm: Add mali400 r4p0_rel0 version

This comes from the in-house kernel.

Change-Id: Ic3f3516e44e71ea9ca2e0b5caa6a9e836ffa599c
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
9 years ago[media] s5p-mfc: Adjust memports handling to MFC v7 needs.
Jacek Anaszewski [Thu, 23 Oct 2014 12:49:42 +0000 (14:49 +0200)]
[media] s5p-mfc: Adjust memports handling to MFC v7 needs.

MFC v7 supports only one memory interface. Adjust memory ports
initialization accordingly.

Change-Id: I56e2c582c41f9ad948dc612b3060688619195b1c
Signed-off-by: Jacek Anaszewski <j.anaszewski@samsung.com>
9 years ago[media] s5p-mfc: Update driver for v7 firmware
Arun Kumar K [Tue, 9 Jul 2013 04:24:39 +0000 (01:24 -0300)]
[media] s5p-mfc: Update driver for v7 firmware

Firmware version v7 is mostly similar to v6 in terms of hardware-specific
controls and commands, so the hardware-specific opr_v6 and cmd_v6 are
re-used for v7 as well. This patch updates the v6 files to also handle
the v7 version.

Change-Id: I137075c6802cfef3aa40cb45413837f18fa969eb
Signed-off-by: Arun Kumar K <arun.kk@samsung.com>
Signed-off-by: Kamil Debski <k.debski@samsung.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
9 years ago[media] s5p-mfc: Core support for MFC v7
Arun Kumar K [Tue, 9 Jul 2013 04:24:38 +0000 (01:24 -0300)]
[media] s5p-mfc: Core support for MFC v7

Adds variant data and core support for the MFC v7 firmware.

Change-Id: I5dc12438d3bfdf6d254f4ced3089e1881d524e0b
Signed-off-by: Arun Kumar K <arun.kk@samsung.com>
Signed-off-by: Kamil Debski <k.debski@samsung.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
9 years ago[media] s5p-mfc: Add register definition file for MFC v7
Arun Kumar K [Tue, 9 Jul 2013 04:24:37 +0000 (01:24 -0300)]
[media] s5p-mfc: Add register definition file for MFC v7

This patch adds the register definition file for the new MFC firmware
version v7. The new firmware supports VP8 encoding along with many
other features.

Change-Id: I3abf2768fe2a59ec45f6f4a2660c3ccf23f7ca88
Signed-off-by: Arun Kumar K <arun.kk@samsung.com>
Signed-off-by: Kamil Debski <k.debski@samsung.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
9 years ago[media] s5p-mfc: Rename IS_MFCV6 macro
Arun Kumar K [Tue, 9 Jul 2013 04:24:36 +0000 (01:24 -0300)]
[media] s5p-mfc: Rename IS_MFCV6 macro

The MFC v6 specific code also applies to MFC v7, as v7 is a superset
of v6 and the HW interface remains largely the same. This patch renames
the macro IS_MFCV6() to IS_MFCV6_PLUS() so that it can be used for v7
as well.
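
A sketch of the renamed check (the exact version encoding used by the
driver is an assumption here):

  /* covers v6 and every later firmware revision, including v7 */
  #define IS_MFCV6_PLUS(dev)      ((dev)->variant->version >= 0x60)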

Change-Id: Ia27f4ed36cc46568bbe9152f13332436f04e106a
Signed-off-by: Arun Kumar K <arun.kk@samsung.com>
Signed-off-by: Kamil Debski <k.debski@samsung.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
9 years agoARM: dts: exynos3250-rinato: add MFC codec device node
Jacek Anaszewski [Wed, 22 Oct 2014 08:59:10 +0000 (10:59 +0200)]
ARM: dts: exynos3250-rinato: add MFC codec device node

This patch adds the MFC codec device tree node.

Change-Id: I6d4ef65b1c518ddcace691c23aded8e0797193e4
Signed-off-by: Jacek Anaszewski <j.anaszewski@samsung.com>
9 years agoARM: dts: exynos3250: add MFC codec device node
Jacek Anaszewski [Wed, 22 Oct 2014 08:56:45 +0000 (10:56 +0200)]
ARM: dts: exynos3250: add MFC codec device node

This patch adds the MFC codec device tree node and the corresponding
IOMMU device node.

Change-Id: I8ea6b68b92fe035ec947cc5319b0cd1d070764d0
Signed-off-by: Jacek Anaszewski <j.anaszewski@samsung.com>
9 years ago[media] media: s5p-mfc: rename special clock to sclk_mfc
Marek Szyprowski [Wed, 27 Aug 2014 12:36:28 +0000 (09:36 -0300)]
[media] media: s5p-mfc: rename special clock to sclk_mfc

Commit d19f405a5a8d2ed942b40f8cf7929a5a50d0cc59 ("[media] s5p-mfc: Fix
selective sclk_mfc init") added support for special clock handling
(named "sclk-mfc"). However, this clock is not yet defined on any
platform, so before adding it to all Exynos platforms it is better to
rename it to "sclk_mfc" to match the scheme used for all other special
clocks on Exynos platforms.

Change-Id: I41f646096e8a82c3cca032e1cc7a70f6d2960059
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kamil Debski <k.debski@samsung.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
9 years ago[media] s5p-mfc: Fix selective sclk_mfc init
Jacek Anaszewski [Thu, 10 Jul 2014 09:00:39 +0000 (06:00 -0300)]
[media] s5p-mfc: Fix selective sclk_mfc init

fc906b6d "Remove special clock usage in driver" removed
initialization of MFC special clock, arguing that there's
no need to do it explicitly, since it's one of MFC gate clock's
dependencies and gets enabled along with it. However, there's
no promise of keeping this hierarchy across Exynos SoC
releases, therefore this approach fails to provide a stable,
portable solution.

Out of all MFC versions, only v6 doesn't use the special clock at all.
For the other versions, log a message only if clk_get fails, as not all
devices with the same MFC version require initializing the clock
explicitly.
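
A sketch of the resulting behaviour (the device pointer and message text
are illustrative, not the driver's exact code):

  struct clk *sclk;

  sclk = clk_get(dev, "sclk_mfc");        /* dev: the MFC platform device */
  if (IS_ERR(sclk))
          dev_info(dev, "failed to get special clock, continuing without it\n");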

Change-Id: Id5ee2696c7b880f45f9744b6bac603dcee9e3dcb
Signed-off-by: Mateusz Zalega <m.zalega@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Jacek Anaszewski <j.anaszewski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Kamil Debski <k.debski@samsung.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
9 years agoRevert "media: s5p-mfc: add to set clock rate"
Jacek Anaszewski [Wed, 5 Nov 2014 15:03:30 +0000 (16:03 +0100)]
Revert "media: s5p-mfc: add to set clock rate"

This reverts commit 2cbd58556a83b417750483de842e1e918de273a3.

Mainline commit d19f405a "Fix selective sclk_mfc init"
solves the issue in a wider scope.

Change-Id: Ib163697c3ae65e30b6e13f6f7170d791d853a6f0
Signed-off-by: Jacek Anaszewski <j.anaszewski@samsung.com>
9 years ago[media] s5p-jpeg: fix HUF_TBL_EN bit clearing path
Jacek Anaszewski [Mon, 1 Sep 2014 13:05:52 +0000 (10:05 -0300)]
[media] s5p-jpeg: fix HUF_TBL_EN bit clearing path

Use the proper bitwise operator when clearing the HUF_TBL_EN bit.
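
The intended pattern (register and bit names are illustrative):

  /* clear the bit with a bitwise AND of the inverted mask,
   * leaving the other bits in the register untouched */
  reg = readl(regs + JPEG_CTRL_REG);
  reg &= ~HUF_TBL_EN;
  writel(reg, regs + JPEG_CTRL_REG);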

Change-Id: Ic78dd26168ffa6124d61f8cb9549339f05cff0d9
Signed-off-by: Jacek Anaszewski <j.anaszewski@samsung.com>
Signed-off-by: Kamil Debski <k.debski@samsung.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
9 years ago[media] s5p-jpeg: avoid overwriting JPEG_CNTL register settings
Jacek Anaszewski [Mon, 1 Sep 2014 13:05:51 +0000 (10:05 -0300)]
[media] s5p-jpeg: avoid overwriting JPEG_CNTL register settings

Take into account the JPEG_CNTL register value that is read before
setting the SYS_INT_EN bit field.
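
That is, a read-modify-write instead of a blind write (register and bit
names are illustrative):

  /* preserve the other JPEG_CNTL bits instead of overwriting them */
  reg = readl(regs + JPEG_CNTL_REG);
  writel(reg | SYS_INT_EN, regs + JPEG_CNTL_REG);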

Change-Id: I76b622f01be6747ea2ad95e63fb305377b0f540b
Signed-off-by: Jacek Anaszewski <j.anaszewski@samsung.com>
Signed-off-by: Kamil Debski <k.debski@samsung.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
9 years ago[media] s5p-jpeg: remove stray call to readl
Jacek Anaszewski [Mon, 1 Sep 2014 13:05:50 +0000 (10:05 -0300)]
[media] s5p-jpeg: remove stray call to readl

There is no need to read INT_EN_REG before enabling interrupts.

Change-Id: Idebb919754df34fb2bfa53982a4ea0a7be3f1fe7
Signed-off-by: Jacek Anaszewski <j.anaszewski@samsung.com>
Signed-off-by: Kamil Debski <k.debski@samsung.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
9 years ago[media] s5p-jpeg: Avoid assigning readl result
Jacek Anaszewski [Mon, 1 Sep 2014 13:05:49 +0000 (10:05 -0300)]
[media] s5p-jpeg: Avoid assigning readl result

Avoid gcc warning when -Wunused-but-set-variable is enabled.
The readl return value need not be assigned to any variable, as the
read itself is just part of the sequence required for clearing the
interrupt flag.
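
In other words, the read can simply be issued without storing the result
(the register name is illustrative):

  /* the read access itself acknowledges/clears the interrupt flag;
   * discarding the value avoids -Wunused-but-set-variable */
  readl(regs + JPEG_INT_STATUS_REG);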

Change-Id: I09b9ec4a724ae46eca0491d81003cd0c0f714ad2
Signed-off-by: Jacek Anaszewski <j.anaszewski@samsung.com>
Signed-off-by: Kamil Debski <k.debski@samsung.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>